Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 4th International Conference on Signal Processing and Machine Learning

    Conference Date

    ISBN

    978-1-83558-353-1 (Print)

    978-1-83558-354-8 (Online)

    Published Date

    Editor

    Marwan Omar, Illinois Institute of Technology


  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241104

    The investigation of performance improvement based on original GANs model

    The rapid advancement of technology has led individuals to rely increasingly on Artificial Intelligence (AI) to handle laborious tasks, and Generative Adversarial Networks (GANs) are one such technique. This study investigates possible approaches to enhancing the performance of GANs models during training. A Generator and a Discriminator are built and coupled together; their learning rates are set to 0.0002, the number of epochs to 400, and the batch size to 128 for the baseline experiment. The simple GANs model is then reimplemented by combining all of the components discussed thus far. The experiments revolve around tuning different parameters in the models, replacing the original loss function, and observing the training process of each model. First, the number of training cycles is increased to 1000 epochs without modifying the model structure, to observe training more closely. Second, the epochs are raised to 5000, the batch size is changed to 512, and the model's performance is assessed at three learning rates: 0.0001, 0.0002, and 0.0003. Finally, the generator learning rate is set to 0.0007, the discriminator learning rate to 0.0003, and the original binary cross-entropy loss function is replaced with the Wasserstein loss. From these three experiments, conclusions are drawn: replacing the initial loss function with the Wasserstein (Earth Mover) distance and setting the discriminator and generator learning rates to 0.0003 and 0.0007 improves the GANs the most. With these adjustments, the model's two losses slowly decrease and begin to stabilize toward 0 at around 2000 epochs.
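
    The loss comparison at the heart of the experiments can be sketched as follows. This is a minimal illustrative sketch, not the paper's code: the function names are invented, and the formulas are the standard binary cross-entropy and Wasserstein (Earth Mover) formulations the abstract contrasts, alongside the hyperparameters it reports.

```python
# Hypothetical sketch of the two discriminator objectives the study compares.
import math

def bce_d_loss(d_real, d_fake, eps=1e-12):
    """Standard GAN discriminator loss: -log D(x) - log(1 - D(G(z)))."""
    n = len(d_real)
    return -sum(math.log(r + eps) + math.log(1.0 - f + eps)
                for r, f in zip(d_real, d_fake)) / n

def wasserstein_d_loss(d_real, d_fake):
    """WGAN critic loss: E[D(fake)] - E[D(real)], minimised by the critic."""
    return sum(d_fake) / len(d_fake) - sum(d_real) / len(d_real)

def wasserstein_g_loss(d_fake):
    """WGAN generator loss: -E[D(fake)]."""
    return -sum(d_fake) / len(d_fake)

# Hyperparameters reported for the abstract's final configuration.
LR_GENERATOR = 0.0007
LR_DISCRIMINATOR = 0.0003
EPOCHS = 5000
BATCH_SIZE = 512
```

    Note the asymmetric learning rates: the generator is updated more aggressively than the discriminator, which the abstract reports as the best-performing setting.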

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241121

    An investigation into deep learning for the analysis of medical images

    In recent years, deep learning has emerged as a pivotal paradigm in the analysis of medical images, with convolutional networks serving as a cornerstone of this advancement. This paper delves into a comprehensive exploration of the fundamental principles underpinning deep learning and its applications within the domain of medical image analysis. Through a meticulous review of many contemporary contributions, this study synthesizes the latest developments in the field, emphasizing tasks like image classification, object detection, segmentation, and registration. The inquiry spans diverse medical disciplines, encompassing neurology, retinal imaging, pulmonary studies, digital pathology, breast and cardiac evaluations, and musculoskeletal analyses. As a culmination, the paper not only assesses the present state-of-the-art achievements but also critically discusses persistent challenges and illuminates promising avenues for future research endeavors.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241137

    Federated learning-based YOLOv8 for face detection

    Recognizing the paramount importance of face detection in the realm of computer vision, there is an urgent need to address the vital concern of protecting individuals' privacy, since face detection inherently involves the handling of extremely sensitive personal information. To tackle this challenge, this study proposes incorporating Federated Learning into the face detection model. The objective is to maintain data localization and enhance security throughout the experiments by harnessing the decentralized nature of collaborative learning. The experimental procedure for federated learning in face recognition models encompasses several key steps: device selection, global model initialization, model distribution to devices, local training, local model updates, model aggregation, global model updates, and multiple iterations. This methodology enables the collective training of models by dispersed devices, enhancing recognition performance while ensuring the preservation of user data privacy. In addition, federated learning is integrated with YOLOv8 to establish a distributed target detection system, in which numerous devices train local YOLOv8 models, thereby safeguarding data privacy and minimizing data transmission. The empirical findings indicate that applying federated learning to the face detection model leads to successful face detection. In the future, novel federated learning algorithms will be considered with the aim of further enhancing privacy.
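
    The distribute/train/aggregate cycle enumerated above can be sketched as follows. This is an illustrative FedAvg-style sketch under simplifying assumptions (flat weight lists stand in for real YOLOv8 parameters; `fed_avg` and `federated_round` are invented names), not the paper's implementation.

```python
# Illustrative FedAvg-style aggregation: the global weights become the
# sample-count-weighted average of each device's locally trained weights.

def fed_avg(local_weights, sample_counts):
    """Aggregate per-device weight vectors into one global vector."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_w = [0.0] * dim
    for weights, n in zip(local_weights, sample_counts):
        for i, w in enumerate(weights):
            global_w[i] += w * (n / total)
    return global_w

def federated_round(global_w, devices, local_train):
    """One round: distribute the model, train locally, aggregate updates."""
    updates, counts = [], []
    for data in devices:
        w, n = local_train(list(global_w), data)  # runs on-device
        updates.append(w)
        counts.append(n)
    return fed_avg(updates, counts)  # only weights leave the devices, not data
```

    Privacy follows from the round structure: raw images never leave a device; only weight updates are transmitted and averaged.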

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241140

    Tackling the cold start issue in movie recommendations with a refined epsilon-greedy approach

    With the rapid growth of the Internet and the consequent surge in data, the current era is characterized by information overload. As the domain of data processing and storage expands, recommendation systems have become pivotal tools in navigating this deluge, assisting users in filtering through vast information landscapes. A notable segment of this is movie recommendation systems. As living standards rise, so does the demand for cinematic experiences. Enhancing and refining the methodologies of these recommendation systems is, therefore, of significant value. However, a consistent challenge is the ‘cold start’ problem encountered when new users join. Without prior viewing records or preferences, these users pose a dilemma for the system: how to offer relevant recommendations without historical data? Addressing this challenge, this paper proposes a unique method grounded in the N-armed bandit model, introducing an enhanced Epsilon-greedy algorithm specifically designed for movie recommendations for such users. By adjusting dynamically based on real-time user feedback, the algorithm aims to continuously hone its recommendation quality, ensuring a consistently better user experience.
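
    The N-armed bandit approach described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's refined algorithm: the class name is invented, and the decaying exploration rate is one simple stand-in for "adjusting dynamically based on real-time user feedback".

```python
import random

class EpsilonGreedyRecommender:
    """Epsilon-greedy N-armed bandit: each arm is a candidate item
    (e.g. a movie or genre); rewards are real-time user feedback."""

    def __init__(self, n_arms, epsilon=0.1, decay=0.99):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.epsilon = epsilon
        self.decay = decay             # assumed refinement: decaying epsilon

    def select(self, rng=random):
        if rng.random() < self.epsilon:
            return rng.randrange(len(self.counts))              # explore
        return max(range(len(self.values)),
                   key=self.values.__getitem__)                 # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n  # incremental mean
        self.epsilon *= self.decay  # explore less as feedback accumulates
```

    For a cold-start user, early rounds are dominated by exploration; as feedback accumulates, the decayed epsilon shifts the balance toward exploiting the best-rated arms.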

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241143

    Face age progression and regression based on various types of GANs

    Recently, there has been a surge of interest among scientists in the application of face age progression and regression, spanning various fields such as criminal investigation and archaeology. Simultaneously, the computer world has been buzzing with excitement over Generative Adversarial Networks (GANs), thanks to their remarkable efficiency and adaptability. Within this context, researchers have successfully harnessed the power of GANs to develop methods for face age progression and regression. Each of these approaches boasts its unique model and architecture, equipping them with distinct sets of limitations and advantages. This article provides a comprehensive review of the methods of implementing face age progression and regression by GANs. To be specific, this paper mainly discusses Wavelet-based GANs and Identity-Preserved cGANs. For each method, the author introduces its basic idea and explains its framework and special parts in detail. The outputs of each model and their characteristics and limitations are also summarized in the discussion. Besides, this paper also describes two real-life applications of this technology, including finding lost children and predicting results after cosmetic surgeries. The introduction of these practical applications provides possible directions for researchers to combine different types of GANs with face age progression and regression in the near future.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241225

    Application, investigation and prediction of ChatGPT/GPT-4 for clinical cases in the medical field

    The integration of Artificial Intelligence (AI) into medical treatment not only makes clinical diagnosis more accurate but also makes patient rehabilitation more systematic and professional, especially since the advent of Large Language Models (LLMs) in the past two years. This paper discusses three clinical cases involving two LLMs, ChatGPT (GPT-3.5) and GPT-4, in Physical Medicine and Rehabilitation (PM&R), and shows their powerful analytical reasoning ability. In the first experiment, ChatGPT and leading professional doctors in the field were asked to classify ophthalmology emergency records spanning a 10-year period, infer the severity of each patient's illness, and determine nursing requirements. In the second experiment, GPT-4, the upgraded version of GPT-3.5, performed delayed diagnosis on the medical-history data of patients aged 65 and over, in order to study the clinical diagnostic opinions and systematic treatment schemes of GPT-4 acting as a "professional doctor". In the last experiment, ChatGPT and GPT-4 took examinations covering 12 categories of neurosurgery, with the aim of studying their medical professional level, discussing their clinical reliability and effectiveness, and assessing the LLMs' ability to reason through questions step by step. The experimental results show that these two large language models have a professional and powerful ability to analyze actual cases, with performance that even far exceeds that of professional clinicians. At the same time, the models' existing defects and their further applications in the medical field are discussed as an outlook.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241227

    Research on the queuing theory in practical applications

    In the face of mounting global urbanization and digitization trends, the need for advanced tools for city planners and system managers becomes increasingly paramount to ensure seamless infrastructure operations. Among the arsenal of available tools, queuing theory emerges as a standout, offering invaluable predictions and strategies for a broad spectrum of situations. This article delves into the nuanced applications of queuing theory, with a specific lens on network communications and urban space planning. Drawing from a rich tapestry of academic sources, the narrative weaves together core principles to shape models that mirror real-world situations. At the heart of this exploration lies a deep dive into solutions that tackle network delay challenges, fine-tuning techniques for 6TiSCH resource allocation, and the subtle art of queue design at railway ticket counters. These instances highlight the adaptability and immediate relevance of queuing theory across various sectors. Those in the fields of design, system architecture, and urban planning will find this read enlightening. By leveraging the insights offered, decision-makers can pave the way for optimized system functionalities and heightened user experiences, vital in an era dominated by urban sprawl and digital transformation.
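
    As a concrete instance of the core principles the article reviews, the classic M/M/1 queue admits closed-form predictions. The following sketch (not tied to any specific model in the article; the function name is invented) computes the standard quantities.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 results: utilization rho = lambda/mu,
    mean number in system L = rho/(1-rho),
    mean time in system W = 1/(mu-lambda) (Little's law: L = lambda*W)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable unless lambda < mu")
    rho = arrival_rate / service_rate
    L = rho / (1.0 - rho)
    W = 1.0 / (service_rate - arrival_rate)
    return {"utilization": rho, "mean_in_system": L, "mean_time": W}
```

    A planner sizing a ticket counter or network link can read off, for example, how sharply waiting time grows as utilization approaches 1.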

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241259

    Exploring the significance and applications of Field Programmable Gate Arrays in modern integrated circuits

    In recent years, the rapid advancement of electronic technology and large-scale integrated circuit technology has led to increased integration of integrated circuits and expanding system scales on integrated motherboards. This evolution presents new challenges and demands in system design. Field Programmable Gate Arrays (FPGAs), a vital VLSI technology, have found extensive applications in communication, image processing, computers, and other domains, becoming a pivotal component of contemporary electronic systems. This paper aims to enhance the understanding of FPGAs by first delving into their theoretical foundations, followed by an overview of their general structural elements and a historical perspective on FPGA development. Additionally, the third section analyses and summarizes relevant literature, focusing on the three primary application areas of FPGAs and elucidating their research processes and findings. Finally, the fourth section offers insights into the future directions of FPGA development, grounded in the current context. By acquainting readers with FPGAs' essential attributes and their multifaceted applications, this paper underscores the pivotal role of FPGAs in the landscape of modern integrated circuits.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241261

    The efficient application analysis of FPGA in automotive intelligent control

    In recent times, the automotive industry has witnessed a remarkable transformation with the rapid development of automotive intelligent control systems. This evolution has shifted consumer expectations from cars being mere modes of transportation to multifunctional lifestyle assistants. A pivotal player in this transformative journey is Field-Programmable Gate Arrays (FPGAs), which have made significant contributions by delivering high-performance and efficiency enhancements across various facets of automotive intelligent control. This article delves into the diverse applications of FPGA technology within the realm of automotive intelligent control, classifying them into three distinct categories. Firstly, it explores FPGA applications in autonomous driving image processing, highlighting their role in enabling real-time image analysis and recognition, a critical component of self-driving vehicles. Secondly, the paper examines FPGA applications in automotive control function implementation, showcasing how FPGAs facilitate the efficient execution of complex control algorithms and decision-making processes in modern automobiles. Lastly, it investigates FPGA applications in automotive electronic design, emphasizing their role in enhancing the overall reliability and performance of electronic systems in vehicles.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241262

    Principles, applications, and challenges of digital predistortion technology

    A power amplifier plays a crucial role in the transmitter end of wireless communication systems. Despite enhancing operational efficiency, it inadvertently introduces nonlinear distortion, leading to signal degradation and spectral regrowth. This occurrence necessitates the incorporation of additional linearization techniques in transmitter terminals to optimize both efficiency and linearity concurrently. Among these, Digital Pre-Distortion (DPD) technology stands out as a well-researched and extensively employed strategy for alleviating the nonlinear distortion induced by power amplifiers. This paper embarks on a comprehensive exploration of DPD technology, offering insight into its evolutionary path and elaborating on its foundational principles and essential techniques. The discussion extends to elucidate the significant technical challenges and the burgeoning trends within the DPD technology landscape. The narrative underscores the significance of DPD in enhancing the performance and efficiency of wireless communication systems, particularly in the context of burgeoning technological advancements and escalating demands for superior communication quality and broader bandwidth. Through a meticulous examination of the DPD technology paradigm, this paper contributes to the ongoing discourse and research, shedding light on prospective developmental avenues and potential enhancements that could further augment the efficacy and reliability of DPD technology in contemporary communication systems.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241390

    Analysis of ARIMA, LightGBM, XGBoost, and LSTM models for stock prediction

    As a matter of fact, stock market prediction has always been a popular problem. Many investors and scholars consider it impossible, believing that the market is random and exhibits no patterns. However, many studies have found that long-term prediction is possible and that the existence of patterns in stock prices makes prediction feasible. Consequently, many researchers and investors have devised new methodologies, drawing on statistical, economic, and other techniques from a variety of disciplines. One methodology that has recently gained momentum is machine learning, which shows great promise and continued improvement. This study examines four prevalent stock market prediction models, namely ARIMA, LightGBM, XGBoost, and LSTMs; explains some of the research done with them, the problems they face, and possible future improvements; and finally briefly discusses other methods researchers have used to predict the stock market that are not covered in detail in the paper.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241391

    Evaluations of the machine learning schemes for cryptocurrency prediction

    As a matter of fact, stock market prediction remains a challenging and crucial aspect of investment decision-making. Contemporarily, cryptocurrencies are among the hottest underlying assets on account of their high volatility. In this case, high-accuracy predictions make it possible to achieve large excess returns from the crypto markets. With this in mind, this study investigates the use of machine learning algorithms, including LSTM, GRU, as well as bi-LSTM, for predicting cryptocurrency prices, focusing on Bitcoin (BTC), Ethereum (ETH), as well as Litecoin (LTC). According to the analysis, the study reveals that the GRU model consistently outperforms the other algorithms in MAPE and RMSE values across all three cryptocurrencies. These findings underscore the reliability and efficiency of the GRU model in cryptocurrency price prediction. Furthermore, the research compares the model's performance with previous studies, reaffirming its effectiveness and potential for practical application in investment strategies as well as decision-making.
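
    The two evaluation metrics used to compare the models above are standard and can be stated compactly; this sketch simply implements their textbook definitions (it is not the paper's code).

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent:
    100/n * sum(|(a - p) / a|)."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root mean squared error: sqrt(mean((a - p)^2))."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

    Lower values of both metrics indicate better fit, which is the sense in which the GRU model is reported to outperform the alternatives.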

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241395

    Stock price prediction for Google based on LSTM model with sentiment analysis

    Data analytics is increasingly widely used in economic and financial fields, with one of the more important applications being the prediction of stock price changes. However, predicting stock price changes is challenging because they are often uncertain and affected by multiple factors. This study uses an LSTM model to predict stock price changes and, in constructing the model, considers the psychological and emotional state of investors by adding sentiment analysis: the sentiment index obtained from the sentiment analysis is combined with the original stock price data as the input to the prediction model. A comparison experiment was set up, i.e., predicting stock price changes with the basic LSTM model alone versus with the improved LSTM model that adds the sentiment index obtained from sentiment analysis. The comparison shows that the predictions of the LSTM model with sentiment analysis are more accurate, which indicates both that changes in investors' sentiment affect stock price movements and that a prediction model accounting for investor sentiment yields more accurate results. The improved LSTM prediction model can help investors effectively avoid possible risks when investing in stocks and thus gain more profit.
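
    The input construction described above, pairing each day's price with its sentiment index and feeding sliding windows to the LSTM, can be sketched as follows. The function name and window layout are illustrative assumptions, not the paper's implementation.

```python
def make_windows(prices, sentiment, lookback):
    """Pair each day's closing price with its sentiment index, then build
    sliding windows of shape (lookback, 2) with the next-day price as the
    prediction target, the usual input format for a sequence model."""
    features = list(zip(prices, sentiment))
    X, y = [], []
    for t in range(lookback, len(prices)):
        X.append(features[t - lookback:t])  # last `lookback` days of inputs
        y.append(prices[t])                 # next day's price to predict
    return X, y
```

    The baseline model in the comparison would use the same windowing with the price column only; the improved model adds the sentiment column.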

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241407

    CO2 emissions prediction based on regression, neural network and SVM

    As a matter of fact, with the fast-paced development of the global economy and technology, the natural environment suffers from large amounts of greenhouse gas emissions, which has attracted considerable attention from researchers. Specifically, in statistics and data science, experts believe that accurate CO2 emissions predictions could help governments make policies accordingly. In this paper, three different machine learning models (regression, neural network and support vector machine) are analysed in terms of their construction process and performance on CO2 emissions prediction. Besides, some practical applications from these studies are shown. In general, based on the analysis, these models have achieved strong results on CO2 emissions prediction, each approaching the problem from a different perspective. Therefore, this study demonstrates the effectiveness of machine learning models for CO2 emissions prediction and encourages more scientists from different fields to take part. Overall, these results shed light on guiding further exploration of carbon emission prediction.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241411

    Subscribing prediction of term deposit based on decision tree, random forest and support vector machine

    Classifying customers and predicting their behaviour from their features and past actions is something many organizations want to do. This study uses a data set about bank customers from Kaggle and three different classification models to classify customers and predict whether they will subscribe to a term deposit based on some of their features. The three classification models are the decision tree, the random forest, and the support vector machine. First, these models are used to obtain feature importances, and accuracy is used to evaluate the results. In addition, this study varies the models' parameters to find settings that yield better results. After processing the data set with these models, the paper also compares their results and identifies the advantages and disadvantages of the three models. Finally, the paper discusses how the models could be improved so that they obtain better results and solve other problems.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241420

    Overview of potential customer prediction for products based on machine learning

    With the development of the Internet, e-commerce, as a new way of online shopping, has provided great convenience to people's lives. Due to the complex potential relationships between consumers and products, it is difficult to recommend products according to consumers' needs, which increases the difficulty of online shopping. Therefore, how to predict the potential consumers of commodities has attracted wide attention from researchers. Fortunately, machine learning based approaches can model the complex underlying relationships in data such as text and images. Therefore, researchers have introduced machine learning into the field of predicting a product's potential consumers and achieved good results. This paper first introduces the relevant data sets and the evaluation metrics for predicting potential users of products. Then, it summarizes machine learning based methods for predicting potential consumers of products. Finally, the paper summarizes the whole article and looks ahead to future research directions.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241421

    A comparative study of the deep learning based model and the conventional machine learning based models in human activity recognition

    Human activity recognition (HAR) has been widely studied as a research field in human behavior analysis due to its huge potential in various application domains such as health care and behavioral science. Recently, deep learning (DL) based methods have also been successfully applied to predict various human activities. This research aims to build different Python-based models that perform HAR using smartphone data, and to calculate and compare the accuracy of the models in order to select the optimal one. Four models were built to classify and predict human activities: Deep Convolutional Neural Network (DCNN), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF). The results of the experiments in this paper show that the Deep Convolutional Neural Network achieves an average recognition accuracy of 95.49%, exceeding the other three models. The underlying reason may be that the Deep Convolutional Neural Network is built on deep learning, a more advanced technique.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241432

    Classification of comments on social media based on long short-term memory

    Social media has assumed a pivotal role in contemporary society, significantly enhancing the convenience of daily lives. Nonetheless, the prevalence of toxic comments on social media platforms has led to varying degrees of harm for individuals. The conventional practice of manually categorizing and blocking such toxic comments has proven to be highly inefficient. To address this issue, this study employs artificial intelligence natural language processing technology to classify social media comments, offering a more effective solution. In the past few years, many algorithms for handling text classification tasks have been introduced and applied in various scenarios. In this work, the author used an LSTM model that can effectively handle long sequence dependency problems to implement text classification. This study achieved an accuracy of 99.4% after training on the Kaggle toxic comments datasets. During the training process, the training accuracy is greater than the validation accuracy while the validation loss is lower than the training loss. After training, the trained model can accurately predict an input sentence and the results are within the expected range.

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241434

    Sentiment analysis based on machine learning models

    Sentiment analysis represents a pivotal research domain within the realm of natural language processing (NLP). Its significance lies in its capacity to scrutinize vast volumes of data originating from social networks and to offer invaluable insights. While numerous studies center on the exploration and enhancement of diverse models and techniques for sentiment analysis tasks, there is a scarcity of research dedicated to evaluating and contrasting the performance of these models. This paper undertakes an investigation to assess the efficacy of four distinct machine learning models: k-nearest neighbor (KNN), random forest, multinomial naive Bayes, and logistic regression, with the aim of shedding light on their relative effectiveness. The data in this research comes from two datasets, SST-2 and IMDB. Data from SST-2 is used for training and testing, and data from IMDB is used for further testing. The term frequency-inverse document frequency (TF-IDF) feature extraction method is integrated with the models and applied to the datasets. Results show that all the four models do well on SST-2 dataset, but KNN and random forest model perform poorly on IMDB dataset.
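
    The TF-IDF feature extraction the study integrates with its classifiers can be sketched as follows. This is a textbook formulation (tf = count/length, idf = log(N/df)), not necessarily the exact variant used in the paper, and the function name is invented.

```python
import math

def tf_idf(corpus):
    """corpus: list of token lists, one per document. Returns one
    {term: weight} dict per document with tf-idf = tf * log(N / df)."""
    n_docs = len(corpus)
    df = {}                      # document frequency of each term
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in corpus:
        vec = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)        # term frequency
            vec[term] = tf * math.log(n_docs / df[term])
        vectors.append(vec)
    return vectors
```

    Terms that appear in every document (like a ubiquitous word in movie reviews) receive zero weight, which is why TF-IDF highlights the sentiment-bearing words the classifiers then learn from.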

  • Open Access | Article 2024-03-29 Doi: 10.54254/2755-2721/54/20241447

    The impact of Q-learning parameters on robot path planning problems in different complex environments

    With the development of reinforcement learning algorithms, their efficient problem-solving framework and broad applicability have won the favour of many scholars, and more and more robot path planning problems are being solved with reinforcement learning methods. This article focuses on the Q-learning algorithm, uses MATLAB to study the impact of Q-learning parameters on robot path planning in environments of different complexity, and tries to find optimal parameter settings. The research finds that environmental complexity significantly affects the speed at which robots solve path planning problems, and that the optimal parameter settings in different environments require detailed analysis of the specific problem.
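
    The parameters the study varies (learning rate alpha, discount gamma, exploration rate epsilon) enter through the standard tabular Q-learning update, Q(s,a) <- Q(s,a) + alpha * (r + gamma * max Q(s',·) - Q(s,a)). The following sketch, in Python rather than the paper's MATLAB, applies it to a toy 1-D corridor; the environment and function name are invented for illustration.

```python
import random

def train_corridor(n_states=5, alpha=0.5, gamma=0.9, epsilon=0.1,
                   episodes=500, seed=0):
    """Toy 1-D corridor: actions 0 (left) / 1 (right); reward 1 for
    reaching the rightmost state. Shows where alpha, gamma and epsilon
    enter the Q-learning update that the study's parameters control."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:                       # rightmost = goal
            if rng.random() < epsilon:                # explore
                a = rng.randrange(2)
            else:                                     # exploit
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update rule
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

    After training, "move right" dominates in every state, with values discounted by gamma per step from the goal; richer environments change how quickly (and whether) this convergence happens, which is the effect the study measures.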

Copyright © 2023 EWA Publishing. Unless Otherwise Stated