Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 2023 International Conference on Machine Learning and Automation

    Conference Date

    2023-10-18

    Website

    https://2023.confmla.org/

    ISBN

    978-1-83558-297-8 (Print)

    978-1-83558-298-5 (Online)

    Published Date

    2024-02-07

    Editors

    Mustafa İSTANBULLU, Cukurova University

Articles

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230415

    Automated valuation of used sailboat prices based on random forest regression modeling

    This study presents a machine learning method for regression-based prediction of used sailboat prices. The dataset contains attributes such as brand, length, year, and listing price, and is preprocessed by removing irrelevant fields and normalizing the data. A random forest model is constructed and evaluated against several alternatives, such as gradient boosting and neural networks, through k-fold cross-validation, and it performs well relative to the other models. The algorithm's ensemble approach effectively models the complex nonlinear relationships in the data, and rigorous validation supports the generalizability of the model. The random forest model outperforms traditional manual assessments in the accuracy of its price estimates. This data-driven solution allows customers to value sailboats on their own and avoid excessive fees, and it allows sailboat companies to develop automated pricing systems that speed up operations. This research provides a powerful machine learning approach for accurately predicting used sailboat prices, and the techniques can be extended to other regression tasks. Further work includes refining the model and deploying it in real-world applications.
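
    As a concrete illustration of the workflow described above (a minimal sketch, not the authors' code), the following Python example fits a random forest regressor and evaluates it with k-fold cross-validation; the file name sailboats.csv and the column names are assumptions.

    ```python
    # Illustrative sketch: random forest price regression with 5-fold cross-validation.
    # The CSV file and column names ("brand", "length", "year", "price") are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("sailboats.csv")
    X = pd.get_dummies(df[["brand", "length", "year"]])   # one-hot encode the brand column
    y = df["price"]
    X_scaled = StandardScaler().fit_transform(X)          # normalize the features

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X_scaled, y, cv=5, scoring="r2")
    print("Mean 5-fold R^2:", scores.mean())
    ```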

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230416

    Handwritten digit recognition based on deep learning techniques

    The identification of handwritten digits is a prominent research area in image recognition and machine learning. This study introduces deep learning to build a handwritten digit recognition model. The proposed approach integrates three separate deep learning models: Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and EfficientNetB0. Specifically, the CNN model uses pooling layers and data augmentation to enhance its classification ability, the RNN model exploits its capacity for processing sequential data, and the EfficientNetB0 model benefits from a deeper and more complex network structure. These models are trained and evaluated on the Modified National Institute of Standards and Technology (MNIST) dataset. The experimental results demonstrate the efficacy of the proposed approach: the CNN model attains a remarkable accuracy of 98.9% on the test set, showcasing its exceptional classification performance; the RNN model achieves an accuracy of 96.7%, underscoring its suitability for analyzing sequential data; and the EfficientNetB0 model attains an accuracy of 98.1%, illustrating the benefits of the deeper network architecture. The models constructed in this study have significant real-world implications for applications such as object recognition systems, medical diagnostics, and autonomous driving.
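
    For readers who want a starting point, a minimal Keras sketch of the CNN component is shown below; the layer sizes and training settings are illustrative assumptions, not the paper's configuration.

    ```python
    # Minimal CNN for MNIST with pooling layers (TensorFlow/Keras); settings are illustrative.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0   # add a channel axis and scale to [0, 1]
    x_test = x_test[..., None] / 255.0

    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
    ```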

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230418

    Analysis of channel performance in modern digital communication technology and understanding the enhancement of channel performance

    With the rapid development of the information age, the demands on communication technology continue to grow. As a crucial communication method, digital communication systems play a vital role in achieving high transmission speeds and reliability. This paper centers on modulation techniques within digital communication systems, with a specific emphasis on analyzing how to reduce bit error rates and increase transmission speeds. The study reviews the current methods employed to jointly optimize transmission speed and reliability and proposes novel insights. The paper details the benefits and drawbacks of contemporary noise processing techniques that aim to enhance channel performance and explores methods for their improvement.
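
    For orientation, a well-known reference point for this speed-reliability trade-off is the bit error rate of BPSK over an AWGN channel (a textbook relation, not a result of the paper):

    $$P_b = Q\!\left(\sqrt{\tfrac{2E_b}{N_0}}\right),$$

    where $Q(\cdot)$ is the Gaussian tail function; increasing the energy per bit $E_b$ relative to the noise density $N_0$ lowers the error rate, while denser modulation raises throughput at the cost of error performance.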

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230419

    A comprehensive view into MPSK modulation classification

    This paper provides a relatively comprehensive overview of current MPSK modulation classification from both the likelihood-based and feature-based perspectives. Traditional methods based on the maximum likelihood (ML) principle diverge mainly in how unknown parameters of the received signal are treated. Recent work adopting feature recognition, such as SVM-based and deep learning-based classification algorithms, is also introduced, and fundamental equations are provided for each method. The paper compares the different methods in each section and explains the circumstances in which each is preferred, aiming to help readers find the best algorithm for their specific case. Moreover, the advantages and disadvantages of the different algorithms are clearly stated for the reader's information. Based on these pros and cons, readers are also encouraged to develop new compound algorithms with better functionality in further research.
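
    As background for the likelihood-based branch, one standard formulation (stated here generically under an AWGN assumption, not reproduced from the paper) is the average likelihood ratio test. Given $N$ received samples $r_n$ and a candidate MPSK constellation $\mathcal{S}_i$ with $M_i$ points, the classifier computes, up to constants that do not affect the decision,

    $$\Lambda_i(\mathbf{r}) = \prod_{n=1}^{N} \frac{1}{M_i} \sum_{s \in \mathcal{S}_i} \exp\!\left(-\frac{|r_n - s|^2}{\sigma^2}\right), \qquad \hat{i} = \arg\max_i \Lambda_i(\mathbf{r}),$$

    where $\sigma^2$ is the noise variance; unknown quantities such as the phase offset or $\sigma^2$ are either averaged over or estimated, and it is precisely in this treatment that the surveyed likelihood-based methods diverge.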

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230420

    An angle of arrival estimation method for intelligent metasurface using machine learning

    Today, techniques for manipulating electromagnetic waves and the data they transport are increasingly crucial. With the rise of next-generation wireless communication systems, these techniques have become increasingly important in several communication technologies, such as intelligent metasurfaces. Since those technologies require locating devices, angle of arrival (AoA) estimation has become a crucial research area. The AoA can now be estimated in various ways, and some subspace methods have already achieved great precision; however, the extensive computation they require is one of their key shortcomings. In this research, the author developed a novel radio frequency (RF) switching circuit design and applied a novel machine learning method to signal AoA estimation. In single-wave settings, the proposed machine learning method was found to perform satisfactorily, with an average error of 0.6 degrees.
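
    The sketch below illustrates the general idea of learning an AoA estimator from data; the features are a synthetic stand-in (toy phase-derived measurements across four ports), not the authors' switching-circuit data, and the network size is an assumption.

    ```python
    # Hypothetical sketch: regress AoA (degrees) from synthetic multi-port RF features.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    angles = rng.uniform(-60, 60, size=5000)                     # ground-truth AoA in degrees
    ports = np.arange(4)                                         # four toy antenna ports
    features = np.cos(np.pi * np.outer(np.sin(np.deg2rad(angles)), ports))
    features += 0.05 * rng.standard_normal(features.shape)       # measurement noise

    X_tr, X_te, y_tr, y_te = train_test_split(features, angles, random_state=0)
    reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    reg.fit(X_tr, y_tr)
    print("Mean absolute error (deg):", np.abs(reg.predict(X_te) - y_te).mean())
    ```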

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230421

    Spectrum map construction optimisation schemes: Sampling and prediction

    The proliferation of electromagnetic devices presents a significant challenge for developing effective techniques for spectrum monitoring, management, and security. Spectrum cartography has been acknowledged as a viable approach to addressing these difficulties. This letter presents a variety of techniques aimed at enhancing the efficiency of current spectrum mapping methodology. The subject matter falls into two primary components: sampling and spectrum prediction. The sampling part includes methods for finding the most valuable sampling points and methods for sampling hardware optimization. Spectrum prediction includes algorithms that use frequency-spatial reasoning to estimate the target spectrum map from data gathered in the nearby area, and algorithms that use the ROSMP framework to estimate the spectrum map from past data. The techniques are introduced according to these two types, together with the key algorithms and devices used in each method. Additionally, the letter lists some drawbacks of certain methods and discusses their development prospects.
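
    As a point of reference for the spatial-reasoning family, the following sketch shows one very simple baseline, inverse-distance-weighted interpolation of sparse power samples onto a grid; it is not one of the surveyed algorithms, and the sample data are synthetic.

    ```python
    # Baseline spectrum-map construction: inverse-distance weighting of sparse samples.
    import numpy as np

    def idw_map(sample_xy, sample_power_dbm, grid_xy, p=2.0, eps=1e-9):
        """Estimate received power at grid points from sparse measurements."""
        d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
        w = 1.0 / (d ** p + eps)
        return (w * sample_power_dbm).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(1)
    samples = rng.uniform(0, 100, size=(30, 2))                   # 30 sampling locations
    power = -40 - 20 * np.log10(np.linalg.norm(samples - 50, axis=1) + 1)  # toy path loss
    gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    spectrum_map = idw_map(samples, power, grid).reshape(50, 50)  # 50x50 estimated map
    ```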

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230422

    Research on sound signal filtering and processing delay based on multiple receivers

    With the rapid development of modern communication and audio technology, sound signal processing has become increasingly important. Based on practical application scenarios, this paper explores the potential of multi-receiver audio signal processing technology and studies the effect of beamforming and adaptive noise cancellation algorithms on audio signal quality. Experimental results show that, with proper technology selection and framework design, audio signal processing in complex environments can be significantly improved. In addition, this paper predicts future trends in signal filtering and processing and puts forward suggestions concerning the application of deep learning in this field, research on adaptive algorithms, the fusion of multi-sensor information, the optimization of computational efficiency, and the establishment of realistic scene simulations. Overall, sound signal processing has great potential in practical applications and is worthy of further research and exploration.
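
    To make the adaptive noise cancellation idea concrete, here is a minimal least-mean-squares (LMS) canceller on synthetic signals; it assumes a separate reference microphone that observes only noise and is not the paper's experimental setup.

    ```python
    # Sketch of LMS adaptive noise cancellation with a noise-only reference channel.
    import numpy as np

    def lms_cancel(primary, reference, taps=32, mu=0.01):
        """Remove the noise component of `primary` that is correlated with `reference`."""
        w = np.zeros(taps)
        out = np.zeros_like(primary)
        for n in range(taps, len(primary)):
            x = reference[n - taps:n][::-1]     # most recent reference samples
            out[n] = primary[n] - w @ x         # error signal = cleaned output
            w += mu * out[n] * x                # LMS weight update
        return out

    t = np.arange(16000) / 8000                 # 2 s at 8 kHz
    speech = np.sin(2 * np.pi * 440 * t)        # stand-in for the desired signal
    noise = np.random.default_rng(2).standard_normal(t.size)
    primary = speech + np.convolve(noise, [0.6, 0.3, 0.1], mode="same")  # noisy main mic
    cleaned = lms_cancel(primary, noise)
    ```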

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230423

    Comparisons of PSK, APSK, and QAM over AWGN and fading channels

    In addition to thermal noise, communication systems normally experience various kinds of fading caused by objects reflecting the signal. PSK, APSK, and QAM modulation schemes are widely used in communication systems, so it is essential to know how well these schemes perform in different fading channels. This research explores the BER performance of these modulation schemes in common channel models, including AWGN, Rayleigh fading, Rician fading, and Nakagami fading, through MATLAB simulations. BER curves over a range of SNR values and symbol constellation diagrams are obtained. It is found that Rayleigh and Nakagami fading distort signals the most, while the impacts of Rician fading in the LoS case and of AWGN can be mitigated significantly by increasing the SNR. Furthermore, QAM has better BER performance in fading channels, while PSK and APSK perform better in AWGN channels when the number of bits per symbol is relatively small. The selection of a modulation scheme should therefore depend on the specific circumstances, and optimization is required when many bits are transmitted per symbol.
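
    The paper's simulations are in MATLAB; the NumPy sketch below reproduces the flavor of such an experiment for QPSK only, comparing AWGN with flat Rayleigh fading under ideal channel knowledge, and is purely illustrative.

    ```python
    # Monte-Carlo BER for Gray-mapped QPSK over AWGN and flat Rayleigh fading.
    import numpy as np

    rng = np.random.default_rng(3)
    n_bits = 200_000
    bits = rng.integers(0, 2, size=n_bits)
    # One unit-energy complex symbol per two bits.
    sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    for ebn0_db in range(0, 21, 4):
        n0 = 1 / (2 * 10 ** (ebn0_db / 10))             # Es = 2*Eb = 1
        noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size))
        h = (rng.standard_normal(sym.size) + 1j * rng.standard_normal(sym.size)) / np.sqrt(2)
        for label, rx in (("AWGN", sym + noise), ("Rayleigh", (h * sym + noise) / h)):
            b_hat = np.empty(n_bits)
            b_hat[0::2] = rx.real < 0                    # per-bit decisions (Gray mapping)
            b_hat[1::2] = rx.imag < 0
            print(f"{label:8s} Eb/N0={ebn0_db:2d} dB  BER={np.mean(b_hat != bits):.5f}")
    ```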

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230424

    Optimization of human-machine interface for fatigue driving problem

    Amidst rapid motorization, the surge in serious traffic accidents has raised concerns about the significant contribution of fatigued driving to road safety. However, current in-vehicle interfaces for fatigue-driving reminders are relatively simplistic and play only a weak role. This study aims to optimize the functionality of traditional in-vehicle HMIs by exploring the key factors of human-computer interaction (HCI) and developing targeted user interfaces that effectively alert drivers and reduce fatigue. A quantitative analysis based on previous experimental data is conducted to model the correlation between interface design factors (such as simplicity and feedback clarity) and physical fatigue parameters. An integrated user interface with fatigue alerts, rest area navigation, driver assistance, air conditioning settings, and voice control modules is proposed. The improved interface is evaluated against the traditional interface in simulated driving conditions using an A/B experiment. The new user interface is expected to demonstrate improved effectiveness in relieving driver fatigue by providing clear visual, auditory, and haptic feedback. This research contributes a structured methodology for applying HCI principles to optimize in-vehicle interface design for mitigating driver fatigue, providing a framework to inform future interface development and enhance road safety.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230425

    Current study on human-computer interaction in machine learning

    Machine learning has become one of the research hotspots both domestically and internationally due to the continued growth of artificial intelligence, and its applications are increasingly widespread. When machine learning methods are applied to real problems, however, there are defects that can lead to biased results. This paper discusses the importance and necessity of human-machine interaction in the application of machine learning methods, as well as where such interaction occurs, and puts forward two questions: whether humans should interact with machines during the machine learning process, and how to make machine learning perform better. To answer these questions, this paper concludes that, in the application of machine learning methods, people with relevant professional knowledge can obtain better results by participating in the machine learning process. Further, when machine learning is applied to the real world, some flaws lead to failures or unsatisfactory results, and this paper proposes improving this situation by involving people in the machine learning process. Finally, this paper summarizes the main shortcomings of current machine learning, argues that its development direction must be human-centered, and expresses some views on machine learning.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230426

    Research on image recognition in human-computer interaction based on convolutional neural networks

    Image recognition is an important research direction in human-computer interaction with broad development prospects, aiming to enable computers to understand and interpret image content. Early image recognition methods relied mainly on hand-designed feature extraction algorithms for image analysis and classification. However, this approach has significant limitations: it may not provide accurate recognition results for complex images and is hard to adapt to different application scenarios. With the advancement of deep learning and the increase in processing power in recent years, convolutional neural network (CNN)-based image recognition methods have achieved remarkable results in human-computer interaction. Building on the use of CNNs for image recognition in human-computer interaction, this paper first studies the CNN model in depth, then describes the application of CNNs in human-computer interaction, enumerates different design methods for comparative analysis, and finally summarizes the advantages and disadvantages of CNNs in application and proposes improvements to existing problems. The research shows that image recognition based on CNNs outperforms traditional network models and achieves higher accuracy, but it still has some disadvantages. To address these issues and produce more effective and precise image understanding and interaction, the model's structure and algorithms require further research and enhancement. The emergence and development of CNNs have greatly promoted image recognition technology, which has been widely used in human-computer interaction and has made great breakthroughs.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230428

    Human-computer interaction: Evaluation of trust and emotion

    With the advancement of information technology, human-computer interaction is becoming increasingly intelligent. This progress has transformed the car cabin from a simple driving space into an intelligent mobile living terminal. At the same time, more and more people in modern society face psychological problems, and timely adjustment of their negative emotions is of great practical significance to social harmony and stability. Traditional methods of emotion regulation require a lot of manpower and energy. Hence, this paper presents an approach that uses text dialogue and somatosensory interaction to detect the user's emotions and subsequently adjust them. By examining and analyzing the current research landscape, this study aims to offer insights for designing trustworthy and emotionally responsive interactions in the automotive context, bringing a better driving experience to smart car users and benefits to many companies. Carrying out this research in depth is of great scientific and applied value for promoting the development of human-computer interaction. Through literature review, case studies, theoretical analysis, and other research methods, this paper sorts out how trust is established and emotional changes are recognized in human-computer interaction; it then discusses the role, defects, and possible improvements of trust establishment in human-car cooperation, the factors affecting user emotions, and how to address the shortcomings of voice emotion recognition in practice. It also discusses the difficulties of establishing trust and the problem of misperceiving emotions.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230429

    Human-computer interaction based on speech recognition

    With the rapid changes of the times, language is no longer confined to books but increasingly serves real-world applications. In recent years, as artificial intelligence and human-computer interaction have continued to develop and leave their footprint at every level of life, speech-based human-computer interaction has gradually entered the mainstream of artificial intelligence. In general, speech recognition technology brings more convenience and naturalness to human-computer interaction. Its benefits are reflected not only in improving the performance and efficiency of technical systems but also in improving the user experience, fostering innovation, and promoting the inclusive and sustainable development of society; it has a positive impact in many fields, and with the continuous progress of technology, its application prospects will broaden further. This paper gives a concise analysis and introduction of the role of speech recognition in human-computer interaction, expounds the key technologies, main algorithms, and working principles of speech-based human-computer interaction, and surveys some of its applications in everyday life and production. At the same time, underlying problems and their solutions are discussed, along with prospects for future speech-based human-computer interaction. It is hoped that this paper can provide some inspiration to relevant research teams and assist them in opening up a bright future.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230430

    Hand gesture recognition in natural human-computer interaction

    This paper introduces gesture recognition. It first gives a precise definition of gesture recognition and explains the difference between gestures and postures, and then describes the technical difficulties of gesture recognition, which span four aspects. After analyzing the technologies, methods, and technical difficulties of gesture recognition, it expounds in detail the gesture recognition process based on data gloves, which was widely studied earlier, and introduces computer-vision-based research achievements, which are currently becoming a research hotspot in this field. It then takes Kinect and HoloLens as examples to introduce practical cases of gesture recognition in wearable devices. It also outlines applications of gesture recognition technology in human-computer interaction, including but not limited to smart terminals, game control, robot control, clinical and health applications, smart homes, sign language recognition, vehicle systems, and interactive entertainment. Finally, it concludes that the biggest challenge researchers face is building a powerful framework that overcomes the common problems with fewer constraints while providing reliable results, and that researchers sometimes need to combine multiple methods for different complex environments.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230431

    Sarcasm detection methods based on machine learning and deep learning

    Sarcasm is an expression of emotion that often conveys criticism and disgust in a positive tone, and it appears everywhere on social networking platforms. Because of this nature, sarcasm can be challenging for Natural Language Processing (NLP) systems. Failing to detect sarcasm correctly can negatively affect the results of natural language processing tasks such as text correction and sentiment analysis. In recent years, studies on sarcasm detection have grown in number and diversity, and according to the survey, methods based on machine learning (ML) and deep learning (DL) account for the majority. Therefore, this paper summarizes the recent literature on sarcasm detection based on ML and DL methods. It first introduces the applications of sarcasm detection, then expounds the general architecture of sarcasm detection and distinguishes between machine learning and deep learning approaches. The author then surveys the relevant literature and compares the results. Finally, the challenges and research directions in this field are summarized. The purpose of this paper is to facilitate follow-up research on sarcasm detection.
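
    As a minimal example of the classical ML branch of this literature, the sketch below trains a TF-IDF plus linear SVM sarcasm classifier; the example sentences and labels are made-up placeholders rather than a real corpus.

    ```python
    # Toy classical-ML sarcasm detector: TF-IDF features with a linear SVM.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    texts = [
        "Oh great, another Monday. Just what I needed.",
        "The concert last night was wonderful.",
        "I love waiting two hours for a table.",
        "This phone's battery lasts all day.",
    ]
    labels = [1, 0, 1, 0]   # 1 = sarcastic, 0 = not sarcastic (hypothetical labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["Sure, because standing in the rain is so much fun."]))
    ```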

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230432

    Combination of Chinese sentiment analysis datasets based on BiLSTM+Attention model

    In training Chinese sentiment analysis models, it is found that a model trained on one dataset has obviously lower prediction accuracy on other datasets. Considering that existing sentiment analysis work mainly uses single-domain corpus datasets, and drawing on existing data processing methods in natural language processing, this paper designs an experiment that combines Chinese datasets from different fields into a large field-imbalanced dataset, in which the number of samples from different fields differs markedly. The new dataset is used to train a comprehensive Chinese sentiment analysis model and achieves satisfactory training results. According to the experimental results, the model trained on the field-imbalanced dataset has high prediction accuracy for samples from various fields, and the prediction accuracy increases with the proportion of that field's corpus in the training dataset. The experiments in this paper provide some ideas for the future construction of large-scale cross-domain Chinese sentiment analysis datasets.
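
    The distinctive step here is merging single-domain corpora into one field-imbalanced training set; a minimal pandas sketch of that step is shown below, with hypothetical file and column names.

    ```python
    # Sketch: combine per-domain Chinese sentiment corpora into one imbalanced dataset.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    domains = {"hotel": "hotel_reviews.csv",          # hypothetical per-domain CSVs
               "ecommerce": "shopping_reviews.csv",   # each expected to have "text", "label"
               "weibo": "weibo_posts.csv"}

    frames = []
    for name, path in domains.items():
        df = pd.read_csv(path)
        df["domain"] = name                           # keep the source field for later analysis
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)   # deliberately field-imbalanced
    train, test = train_test_split(combined, test_size=0.2,
                                   stratify=combined["domain"], random_state=0)
    print(combined["domain"].value_counts())          # per-field sample counts
    ```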

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230434

    Evolution of CNNs in human tracking applications

    Tracking the movement of objects in videos or footage from CCTV systems plays an integral role in crime investigations, surveillance, security predictions, and many other domains. Historically, this task was primarily entrusted to dedicated observers or analysts who would be summoned to review pre-recorded footage post-event. With the advent of machine learning and AI, convolutional neural networks (CNNs) have paved the way for computers to augment human capabilities in analyzing streaming videos or archived recordings. Among the various tracking methodologies that machine learning offers, YOLO (You Only Look Once) and R-CNN (Region-based Convolutional Neural Networks), along with its iterations, stand out as some of the most reliable and precise. However, the scope of analysis often extends beyond these techniques. To enhance accuracy and provide an adept classification mechanism, the Deep SORT (Simple Online and Realtime Tracking) algorithm emerges as pivotal. Its synergy with human detection remains a significant area of discussion and will be deliberated upon in this study. This review aims to elucidate the intricacies of these state-of-the-art methods and their interplay in modern tracking systems.

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230436

    Exploring methods to enhance network security through artificial intelligence

    In the dynamic landscape of cybersecurity, traditional rule-based systems find themselves frequently outstripped by the intricacy, variety, and mutable nature of cyber threats. This paper explores the capabilities of Machine Learning (ML) in detecting cyber-attacks, offering a fresh perspective to fortify cyber defense mechanisms. Through its unparalleled strengths in data analysis, pattern discernment, and outcome prediction, Machine Learning emerges as a promising ally in grappling with the multifaceted challenges posed by cyber adversaries. The exploration zeroes in on the potential of utilizing machine learning for cyber-attack detection, spotlighting supervised learning algorithms such as SVM and Random Forest. Experimental findings robustly underscore the value of Machine Learning in identifying potential cyber threats. In conclusion, the transformative potential of Machine Learning in the domain of cyber-attack detection is evident. Equipped with the prowess to derive insights from vast data sets, swiftly adapt to changing parameters, and preemptively recognize threats, Machine Learning promises to redefine the paradigms of cybersecurity. As the digital expanse continues to evolve, defense mechanisms must also evolve, with Machine Learning serving as a pivotal tool in this endeavor.
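
    The sketch below shows the supervised setup the abstract points to, training SVM and Random Forest classifiers on labeled traffic-style features; the feature matrix is synthetic and stands in for a real intrusion dataset.

    ```python
    # Illustrative supervised attack detection with SVM and Random Forest on synthetic features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(4)
    X = rng.standard_normal((2000, 10))                       # 10 toy traffic features per flow
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.standard_normal(2000) > 1).astype(int)  # 1 = attack

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, clf in (("SVM", SVC(kernel="rbf")), ("Random Forest", RandomForestClassifier(200))):
        clf.fit(X_tr, y_tr)
        print(name)
        print(classification_report(y_te, clf.predict(X_te)))
    ```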

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230437

    Long-term and short-term memory network based movie comment sentiment analysis

    This paper proposes a sentiment analysis method for movie reviews based on the Long Short-Term Memory (LSTM) network model. Sentiment analysis is widely used in movie recommendation systems, which can recommend and assess movies by understanding the audience's emotional response to them. However, owing to the characteristics of movie text and the complexity of emotional expression, traditional approaches such as classical machine learning have limitations and shortcomings in sentiment analysis. The method proposed in this paper exploits the LSTM model's superior memory and its ability to capture long-range dependencies in movie texts, which clearly improves the accuracy and reliability of sentiment analysis and demonstrates the advantages of the LSTM model over traditional models. Future research can further explore other deep learning models and algorithms to make sentiment analysis more accurate and provide users with reliable movie recommendations.
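
    A minimal Keras sketch of such an LSTM classifier is shown below, using the public IMDB review dataset as a stand-in for the paper's corpus; the vocabulary size, sequence length, and layer sizes are assumptions.

    ```python
    # Minimal LSTM sentiment classifier for movie reviews (IMDB dataset as a stand-in).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    vocab, maxlen = 20000, 200
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.imdb.load_data(num_words=vocab)
    x_tr = tf.keras.preprocessing.sequence.pad_sequences(x_tr, maxlen=maxlen)
    x_te = tf.keras.preprocessing.sequence.pad_sequences(x_te, maxlen=maxlen)

    model = models.Sequential([
        layers.Embedding(vocab, 128),
        layers.LSTM(64),                        # captures long-range cues in the review
        layers.Dense(1, activation="sigmoid"),  # positive vs. negative sentiment
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=2, batch_size=64, validation_data=(x_te, y_te))
    ```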

  • Open Access | Article 2024-02-07 Doi: 10.54254/2755-2721/36/20230438

    Ensuring the security of the Internet of Things: A deep dive into current network weaknesses and approaches for strengthening

    As the Internet of Things (IoT) rapidly takes center stage in today's digital revolution, its integration across various sectors is becoming remarkably ubiquitous. Nevertheless, with the accelerated adoption of IoT, a surge of security concerns has concurrently surfaced. Acknowledging this paradigm, this article delves deep into the underlying architecture of IoT systems, spotlighting their constituent elements and the complex interplay between them. Through meticulous literature reviews and scrutiny of real-world incidents, the most prevalent security vulnerabilities inherent to these systems are pinpointed. For each identified vulnerability, a suite of protective countermeasures is meticulously crafted, underscoring the imperative to preemptively tackle these security gaps. It's paramount to recognize that IoT system security isn't merely a technical requisite but a cornerstone ensuring the system's integrity and resilience. As businesses and individuals increasingly rely on IoT for their day-to-day operations, prioritizing security isn't just a luxury; it's an absolute necessity. Developers and users alike must remain vigilant, consistently updating and fortifying their systems. In doing so, the full potential of IoT can be harnessed while minimizing risks, ensuring that these systems remain both robust and trustworthy in an ever-evolving digital landscape.

Copyright © 2023 EWA Publishing, unless otherwise stated.