Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 4th International Conference on Signal Processing and Machine Learning

    Conference Date

    2024-01-15

    Website

    https://www.confspml.org/

    Notes

     

    ISBN

    978-1-83558-345-6 (Print)

    978-1-83558-346-3 (Online)

    Published Date

    2024-03-25

    Editors

    Marwan Omar, Illinois Institute of Technology

Articles

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241097

    AdaGCR: An improved method for optimizing machine learning training process

In contemporary machine learning, training datasets are typically divided into batches, and models are updated incrementally through batch iterations to save memory and reduce overfitting. However, determining optimal hyperparameters such as the learning rate, batch size, and number of epochs remains a challenge that often relies on empirical insight. This paper explores a novel method called Adaptive Gradient Conflict Rate (AdaGCR) to optimize the training process. It leverages the idea of the gradient conflict rate, which reflects the model’s position within a batch model set, and accordingly adjusts the global learning rate. The proposed method is tested by training a Deep Neural Network (DNN) model on the MNIST dataset, representing simple tasks, and a ResNet-18 model on the CIFAR-10 dataset, representing more complicated, real-world tasks. Experiments conducted on the DNN demonstrate the proposed method’s effectiveness in reducing overfitting and enhancing convergence, particularly with a well-suited initial learning rate. However, its applicability to more complex models like ResNet-18 may require further refinements, such as layer-specific learning rate adjustments. Future research should focus on fine-tuning AdaGCR and extending its utility across diverse machine learning models and tasks.
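As a rough sketch of the mechanism this abstract describes, the gradient conflict rate can be read as the fraction of conflicting gradient pairs within a batch set, with the global learning rate scaled down as conflicts rise. The precise definition of the conflict rate and the adjustment rule below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gradient_conflict_rate(grads):
    """Fraction of gradient pairs within a batch set whose directions conflict
    (negative cosine similarity). An illustrative reading of the abstract."""
    n = len(grads)
    conflicts, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos = np.dot(grads[i], grads[j]) / (
                np.linalg.norm(grads[i]) * np.linalg.norm(grads[j]) + 1e-12)
            pairs += 1
            if cos < 0:
                conflicts += 1
    return conflicts / max(pairs, 1)

def adjust_lr(base_lr, conflict_rate, sensitivity=0.5):
    """Hypothetical rule: shrink the global learning rate as conflicts rise."""
    return base_lr * (1.0 - sensitivity * conflict_rate)
```

In practice the per-example or per-layer gradients would come from the training framework; the paper's actual adjustment schedule may differ.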

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241102

    The investigation and prediction for salary trends in the data science industry

The aim of this study is to utilize machine learning techniques to analyze salary trends within the data science industry spanning the last three years. Initially, this study presented an overview of four machine learning models: Random Forests, eXtreme Gradient Boosting (XGBoost), Neural Networks, and Support Vector Regression (SVR), elucidating their fundamental principles and characteristics. Subsequently, this study gathered, preprocessed, and engaged in feature engineering with salary data from the data science sector over the past three years. These four machine learning models were then employed for salary prediction, and the ensuing model outcomes were meticulously examined. By conducting a comparative analysis and evaluating each model’s performance, their respective strengths and weaknesses were identified. In conclusion, this study summarized its findings and deliberated on potential future research directions. The innovation inherent in this research lies in the application of diverse machine learning models to forecast salaries within the data science industry, coupled with the comprehensive comparison and evaluation of these models. The main conclusion is that XGBoost performs best in salary prediction, while neural networks are more accurate but also more complex, and SVR has limited applicability. Future research prospects include improving the accuracy and interpretability of the models and exploring more features and data processing methods to enhance the accuracy and practicality of salary prediction in the data science industry.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241142

    Stock prediction and analysis based on machine learning algorithms

The stock market has consistently remained a focal point of substantial concern for investors. Nevertheless, due to the intricate, tumultuous, and often noisy nature of the stock market, forecasting stock trends presents a formidable obstacle. To improve the accuracy of stock trend predictions, the author adopts a combination of the Long Short-Term Memory (LSTM) neural network and a noise reduction technique known as Ensemble Empirical Mode Decomposition (EEMD). This composite model is employed to predict daily stock price movements, aiming to provide more precise insights into market behavior. The framework is capable of generating the daily stock price trend curve based on the training outcomes. EEMD, standardization, and other data preprocessing methods can effectively reduce the noise of the stock market. In this paper, three U.S. stocks from 2010 to 2023 are chosen as the research subjects. After training is completed, the prediction curve generated by the model closely aligns with the actual curve. Furthermore, three commonly used evaluation metrics were utilized to assess the model’s performance. Based on these experimental outcomes, the model adeptly forecasts stock trends.
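The preprocessing side of such a pipeline can be sketched as follows. The EEMD denoising step itself is omitted (a full implementation is nontrivial; packages such as PyEMD provide one), and the window length is an illustrative assumption:

```python
import numpy as np

def make_windows(prices, lookback=5):
    """Standardize a price series and build (X, y) sliding windows for a
    sequence model such as an LSTM. EEMD denoising would be applied to
    `prices` beforehand (omitted here)."""
    x = np.asarray(prices, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # standardization step
    X = np.array([x[i:i + lookback] for i in range(len(x) - lookback)])
    y = x[lookback:]                         # next-day value to predict
    return X, y
```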

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241144

    Exploration of classical neural network architecture in cycleGAN framework with face photo-sketch synthesis

CycleGAN has been a benchmark in the style transfer field, and various extensions with wide applications and excellent performance have been introduced in recent years; however, discussion of its architecture, which could enable a deeper understanding of generative models, remains scarce. In this paper, several architectures referenced from classical convolutional neural networks are implemented in the generator and discriminator of the CycleGAN model, including AlexNet, DenseNet, GoogLeNet, and ResNet. Their feature extraction modes are imitated and modified into blocks embedded in the encoder part of the generator, while the discriminator uses their models directly, except that it outputs a patch classification. In addition, to mitigate a possible imbalance between generator and discriminator ability, a self-adjusting learning rate strategy based on discriminator confidence is introduced. Multiple evaluation metrics are utilized to measure the performance of each model. Experimental results indicate that an AlexNet-like architecture can achieve competitive performance compared with the baseline CycleGAN while presenting better fine details and high-frequency information.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241160

    Enhancing mask detection performance based on YOLOv5 model optimization and attention mechanisms

Due to the COVID-19 pandemic, there has been a significant increase in the usage of masks, leading to more complex scenarios for mask detection techniques. This paper focuses on optimizing the performance of mask detection using the You Only Look Once (YOLO) v5 model. In this study, the YOLOv5 object detection model was employed for training on the mask dataset. Diverse model improvement techniques were explored to enhance the model's capability to capture crucial features and differentiate masks from the background in complex scenarios. Finally, the modified model was compared with the earlier original target detection model to identify the greatest performance gain. The CSPDarknet design with the TensorFlow framework is utilized in this study, and the Attention Mechanism module is implemented through the Keras library. The objective is to optimize the three feature layers between the backbone network and the neck by integrating multiple attention mechanisms. This will enable the model to more quickly and accurately capture important features when dealing with complex scenarios by adjusting the feature map weights. Additionally, in the feature pyramid network, shallow feature maps are fused with deeper feature maps in a certain order to determine the most efficient feature fusion method. Finally, this study identified the optimal combination of attention mechanism and feature fusion through ablation experiments. The results of the experiment demonstrate that the combination of SE block and shallow feature fusion (SE + FF2 model) can greatly enhance category confidence, leading to an improved model performance.
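For readers unfamiliar with the SE block used in the SE + FF2 variant, a minimal NumPy sketch of squeeze-and-excitation channel reweighting looks like this; the untrained, caller-supplied weight matrices stand in for the learned fully connected layers:

```python
import numpy as np

def se_block(feature_map, weights1, weights2):
    """Minimal Squeeze-and-Excitation reweighting of a (C, H, W) feature map.
    weights1 (C_reduced x C) and weights2 (C x C_reduced) are the two FC
    layers of the excitation step (illustrative, untrained)."""
    squeeze = feature_map.mean(axis=(1, 2))             # global average pool -> (C,)
    hidden = np.maximum(weights1 @ squeeze, 0.0)        # FC + ReLU (reduction)
    scale = 1.0 / (1.0 + np.exp(-(weights2 @ hidden)))  # FC + sigmoid -> (C,)
    return feature_map * scale[:, None, None]           # channel-wise reweighting
```

The actual study implements this via Keras layers inside the YOLOv5 neck; this sketch only shows the arithmetic of the block.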

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241168

    Translation from sketch to realistic photo based on CycleGAN

    Forensic sketches serve as crucial tools for law enforcement agencies in identifying individuals of interest. However, their effectiveness can be limited due to constraints such as incomplete information and variations in interpretation by sketch artists, often rendering these sketches unrecognizable to the general public. In response to this challenge, this paper introduces an innovative approach—a CycleGAN-based image generation model. This model aims to transform monochrome forensic sketches into images with realistic colors and textures, offering an alternative visual representation that aids the public in identifying wanted individuals. The model is trained on unpaired datasets containing sketches and photographs of human faces, encompassing diverse scenarios. Through this training, it learns to generate images that closely resemble photographs captured in everyday environments. Impressively, the proposed model demonstrates rapid convergence, with both the generator and discriminator reaching optimal performance within just 500 epochs. Consequently, the generated images prove to be significantly more recognizable than the original sketches, thus enhancing the potential for successful identifications.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241169

    An improvement on common optimization methods based on SuperstarGAN

Image processing has long been a focal point of research, offering avenues to enhance image clarity and transfer image features. Over the past decade, Generative Adversarial Networks (GANs) have played a pivotal role in the field of image conversion. This study delves into the world of GANs, focusing on the SuperstarGAN model and its optimization techniques. SuperstarGAN, an evolution of the well-known StarGAN, excels in multi-domain image-to-image conversion, overcoming limitations and offering versatility. To better understand its optimization, this study explored the effects of different optimizers, such as Adam, SGD, and Nadam, on SuperstarGAN's performance. Using the CelebA face dataset, which contains over 200,000 images annotated with 40 attributes, this study conducted experiments to compare these optimizers. The results revealed that while SGD and Nadam can achieve comparable results to Adam, they require more iterations and careful tuning, with SGD showing slower convergence. Nadam, with its oscillatory nature, shows promise but requires proper learning rate adjustments. This research sheds light on the critical role of optimizer choice in training SuperstarGAN. Adam emerges as the most efficient and stable option, but further exploration of Nadam's potential is warranted. This study contributes to advancing the understanding of optimization techniques for generative adversarial networks, with implications for high-quality facial image generation and beyond.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241210

    Analysis and prospects of automobile intelligent assisted driving characteristics based on FPGA technology

    This article provides a comprehensive exploration of the pivotal role that field-programmable gate arrays (FPGAs) play in the advancement of autonomous driving technology. FPGAs, which made their debut in the early 1980s, have emerged as a crucial component in this field, owing to their robust parallel processing capabilities, real-time data analysis capabilities, and exceptional customizability. With the ever-increasing demand for autonomous driving solutions, the adoption of FPGAs has become indispensable in meeting the requirements for high-speed data processing and instantaneous response times. Within these pages, we delve into the significant role that FPGAs assume in elevating intelligent driving systems, offering a deep dive into this subject through meticulous case studies and technical insights. This article casts a spotlight on several compelling instances where FPGAs shine, notably in adaptive cruise systems, obstacle recognition, automatic emergency braking, and intelligent parking assistance, achieved through seamless integration with YOLO technology. These real-world examples serve to underscore the pivotal role that FPGAs play in ensuring road safety and propelling technological advancement in the realm of autonomous driving.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241218

    Predicting customer subscriptions to fixed-term deposit products based on machine learning approach

    In the contemporary dynamic financial milieu, financial institutions confront the exigency of comprehending and tailoring services to meet the idiosyncratic demands of individual customers, with a particular emphasis on forecasting fixed-term deposit commitments. The integration of machine learning proffers a robust framework to disentangle the intricacies inherent in customer decision-making processes. This investigation expounds upon a systematic framework encompassing data rectification, validation, and the process of feature curation, underscoring the imperative nature of a scrupulous and methodical approach. The exposition introduces an array of machine learning models, including XGBoost, Logistic Regression, Random Forest, Neural Networks, and Gaussian Naive Bayes, offering elucidation on their respective applications. Noteworthy attention is accorded to the Random Forest and Neural Networks models, with detailed explanations of their operational principles and strengths. The study underscores the criticality of conscientious data preprocessing, featuring a presentation of pertinent Python libraries and methodologies for data refinement, validation, and feature selection. The discourse culminates in a delineation of the potential of neural networks as a potent instrument in the domain of machine learning, affording insight into their intricate architecture and the iterative training process, whilst accentuating their versatility across diverse domains. In summation, this inquiry furnishes a comprehensive and pragmatic compendium on the utilization of machine learning methodologies for the prediction of customer subscriptions within the financial sector.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241223

    Optimizing molecular design through Multi-Armed Bandits and adaptive discretization: A computational benchmark investigation

    In the present benchmark study, a novel strategy is unveiled for the optimization of molecular design by integrating Multi-Armed Bandits (MAB) with cutting-edge adaptive discretization techniques. Central to this approach is the employment of the Ultrafast Shape Recognition (USR) method – a proven technique for assessing molecular similarity. Moreover, the integration of the Zooming Algorithm is noteworthy. This innovative algorithm demonstrates dynamism, adjusting in real-time to adeptly navigate the vast expanse of chemical space. One of the standout revelations from this investigation is the significant influence of a scaling factor. It serves as the fulcrum for striking an optimal balance between computational agility and peak performance. Such insights profoundly challenge the limitations inherent in conventional discrete MAB methodologies, especially when operating within the bounds of finite computational bandwidth. Beyond merely delineating a blueprint for future interdisciplinary endeavors, this research illuminates the intricacies of molecular design optimization. Additionally, it suggests that a marriage between network and cluster analysis could be the key to enhancing and fine-tuning the reinforcement learning journey.
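For context, the discrete baseline that adaptive discretization refines is the classical finite-armed bandit. A minimal UCB1 loop (an illustrative baseline, not the paper's Zooming Algorithm) shows where a scaling factor enters as the exploration coefficient:

```python
import math

def ucb1(pull, num_arms, rounds, c=2.0):
    """UCB1 loop over a finite arm set. `pull(arm)` returns a reward in [0, 1];
    the scaling factor c trades exploration against exploitation. Returns the
    arm with the best running mean after `rounds` plays."""
    counts = [0] * num_arms
    values = [0.0] * num_arms
    for t in range(1, rounds + 1):
        if t <= num_arms:
            arm = t - 1  # play each arm once first
        else:
            arm = max(range(num_arms), key=lambda a:
                      values[a] + math.sqrt(c * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # running mean update
    return max(range(num_arms), key=lambda a: values[a])
```

In the molecular setting, an "arm" would correspond to a candidate region of chemical space and the reward to a similarity score such as USR; the Zooming Algorithm replaces the fixed arm set with adaptively refined regions.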

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241229

    The evolution and current frontiers of path planning algorithms for mobile robots: A comprehensive review

The success of robotics heavily relies on path planning, a crucial link between computational processes and actual robot actions. This review explores the evolution of path-planning algorithms, tracing their development from early strategies to advanced, adaptable methodologies powered by artificial intelligence. It first examines techniques such as grid-based methods and potential fields, discussing their strengths and inherent limitations. Moving forward, the review delves into the game-changing potential of sampling-based methods, highlighting advancements such as Probabilistic Roadmaps (PRM) and Rapidly-exploring Random Trees (RRT). Furthermore, it dissects the impact of artificial intelligence on path planning, emphasizing the synergy between machine learning (particularly deep reinforcement learning) and robotic navigation. This review also sheds light on the challenges faced by these algorithms, including real-world implementation hurdles and the potential risks of relying on AI-centric approaches. Lastly, it offers insights into future trends, speculating on how emerging technologies like quantum computing may shape next-generation path planning. With this overview, the review aims to be a resource for researchers, academics, and practitioners interested in exploring the vast realm of robotic path planning.
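As a concrete instance of the grid-based methods the review discusses, a breadth-first search planner on an occupancy grid can be sketched in a few lines. This is a didactic baseline, not any specific algorithm from the review:

```python
from collections import deque

def grid_bfs(grid, start, goal):
    """Grid-based planner: breadth-first search on an occupancy grid.
    grid[r][c] == 1 marks an obstacle. Returns a shortest 4-connected
    path from start to goal as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal
            while node is not None:       # walk back through predecessors
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable
```

Sampling-based methods such as PRM and RRT avoid this exhaustive discretization, which is precisely the limitation the review highlights.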

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241240

    Exploration of hyperparameter efficiency for image style transfer

    Image style transfer is a popular computer vision technique that aims to merge the content of one image with the style of another to generate a unique, original image with a different aesthetic feel. Numerous models have been developed for various applications in this field, including portrait painting, art creation, and medical image processing, where additional information or annotations could be added to medical images, making them easier to read and understand. This study focuses on optimizing parameters within the pre-trained Visual Geometry Group (VGG19) network architecture, building on Google Brain’s 2017 work on Arbitrary style transfer in one model. The goal is to improve the quality and realism of the generated images by exploring different parameter combinations and fine-tuning weights and learning rates. This work carefully selects a range of styles and content to compare their effects during the optimization process. Finally, this fine-tuning process strikes a balance between content loss and style loss, which results in high-quality and more realistic images.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241251

    Unveiling the powerhouses of AI: A comprehensive study of GPU, FPGA, and ASIC accelerators

    In the ever-evolving realm of technology, Artificial Intelligence (AI) has ushered in a transformative era, reshaping our interactions with digital systems, and expanding the horizons of machine capabilities. At the core of this AI revolution are specialized hardware entities known as AI accelerators. These accelerators, including Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs), play a pivotal role in advancing AI applications across diverse domains. This paper delves into these accelerators, offering an in-depth exploration of their unique attributes and application domains. GPUs, initially designed for graphics, have evolved into versatile tools, thanks to their parallel computing prowess and efficient memory utilization. FPGAs, with reconfigurability and low latency, prove valuable in aerospace and neural network implementations, though they come with cost and expertise challenges. ASICs, engineered for specific functions, excel in performance and power efficiency for mass production but require significant time and resources for development. Furthermore, this paper presents practical application analyses, showcasing how these accelerators are effectively deployed in real-world scenarios. With this comprehensive exploration, readers gain a deeper understanding of AI accelerators and their transformative impact on the AI landscape.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241280

    3D special effects modelling based on computer graphics technology

    Cinema has radically transformed due to 3D special effects modelling, a product of advancing computer graphics technology. This field has significantly impacted entertainment and technology, and is now extending its reach beyond. A basic understanding of 3D modelling is crucial, obtainable through academic resources that provide historical context and insights. The journey begins with the emergence of computer-generated imagery (CGI) in the mid-20th century, pioneered by innovators like Ivan Sutherland. We progress to the 1980s, where film and gaming intersected, resulting in landmark films like "Tron". These movies showcased the storytelling potential of CGI. The 1990s further demonstrated CGI's capacity to blend with live-action footage, with films like "Jurassic Park". By the 2000s, filmmakers fully harnessed 3D modelling, creating immersive cinematic experiences like "The Lord of the Rings". The democratization of 3D modelling and its integration with virtual production and augmented reality have since expanded its applications. We conclude by looking forward, where emerging technologies promise to further integrate 3D modelling into our lives, reshaping fields like gaming and navigation. This essay will explore the evolution and impact of 3D special effects modelling.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241285

    From MOSFET to FinFET to GAAFET: The evolution, challenges, and future prospects

    With the swift progression of semiconductor technology, the transition from Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) to Fin Field-Effect Transistors (FinFETs) and further to Gate-All-Around Field-Effect Transistors (GAAFETs) presents significant potential for the future of electronic devices and systems. This article delves into the intricate applications, challenges, and prospective evolutions associated with FinFET and GAAFET technologies. Findings suggest that these technologies are particularly apt for low-power logic systems, high-performance computing, and artificial intelligence domains. However, as dimensions shrink, challenges pertaining to heat dissipation, leakage, and manufacturing consistency become prominent. Despite these hurdles, the horizon for semiconductor technology remains bright, encompassing exploration of alternative materials such as Germanium and 2D compositions and innovative designs like U-shaped Field-Effect Transistors and Complementary Field-Effect Transistors. As the industry continues its relentless pursuit of even more efficient, smaller transistors, the exploration of alternative materials and diversification in architecture may play a pivotal role in future developments. In essence, while the semiconductor sphere confronts challenges, relentless innovation promises a future brimming with even more efficient and compact transistor technologies.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241287

    Based on social network analysis: Whether international trade network changed before and after the outbreak of the COVID-19 pandemic?

Since the outbreak of the COVID-19 pandemic, international trade has faced a severe impact. Based on the social network analysis method, this research models the world trade network over the six years from 2017 to 2022. The world trade network is represented as a weighted directed adjacency matrix built from data published in the UN Comtrade database; the matrix also draws on the gravity model. With visualized data and images, this research conducts a quantitative study of the impact of the COVID-19 pandemic on the world trade network from the perspectives of network analysis, centrality analysis, and the clustering coefficient. Apart from that, the research focuses on the international trade pattern and how it changes under the impact of the pandemic. According to this research, although the world trade network has not been deeply affected by the disease, some concealed tendencies have been discovered that may profoundly change the world pattern in the future.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241326

    Sentiment analysis with adaptive multi-head attention in Transformer

We propose a novel framework based on the attention mechanism to identify the sentiment of a movie review document. Previous efforts on deep neural networks with attention mechanisms focus on encoders and decoders with fixed numbers of attention heads. Therefore, we need a mechanism to stop the attention process automatically if no more useful information can be read from memory. In this paper, we propose an adaptive multi-head attention architecture (AdaptAttn) that varies the number of attention heads based on sentence length. AdaptAttn has a data preprocessing step in which each document is classified into one of three bins (small, medium, or large) based on sentence length. A document classified as small goes through two heads in each layer, the medium group passes through four heads, and the large group is processed by eight heads. We examine the merit of our model on the Stanford large movie review dataset. The experimental results show that the F1 score of our model is on par with the baseline model.
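The binning rule described in this abstract can be sketched directly. The bin boundaries below are illustrative assumptions, since the abstract does not state the thresholds:

```python
def adapt_heads(num_tokens, small_max=64, medium_max=256):
    """Bin a document by length and return the number of attention heads per
    layer, following the AdaptAttn scheme (2 / 4 / 8 heads). The boundary
    values small_max and medium_max are illustrative, not from the paper."""
    if num_tokens <= small_max:
        return 2   # small documents: two heads per layer
    if num_tokens <= medium_max:
        return 4   # medium documents: four heads
    return 8       # large documents: eight heads
```

The chosen head count would then be passed to the Transformer layer configuration when the batch is built.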

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241348

    Analysis of the implementation for big data analysis in sales prediction: LSTM, ANN and DNN

Big data analysis is a choice the sales forecasting industry has made in response to the trend of the times, especially given the rapid development of computational power and machine learning models. In general, it consists of two parts: big data and machine learning. Machine learning in particular has received widespread attention in this field once supported by big data. On this basis, this study examines the implementation of three machine learning models: LSTM, ANN, and DNN. The prediction accuracy of all three models meets practical requirements, and their performance on complex data is better than that of traditional models, but their costs are still not affordable for most companies. The purpose of this article is to help readers understand the development of big data analysis in the sales forecasting industry, its current advantages and disadvantages, and possible future directions.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241379

    The application of deep learning in autonomous driving

Autonomous driving technology is currently a globally prominent subject, with its relevance and impact readily apparent. Leveraging advanced sensors, sophisticated algorithms, and state-of-the-art computer vision techniques, autonomous vehicles can autonomously navigate, mitigate traffic accidents, and alleviate urban congestion. Furthermore, this technology is poised to accelerate innovation within the automotive sector, drive industrial advancement, and enhance people's convenience and safety in their travel experiences. The importance of autonomous driving technology is reflected not only in its own advantages but also in its ability to lead the future development of intelligent transportation: it can promote intelligent transportation systems, realize intelligent interconnection between vehicles and between vehicles and road infrastructure, and further enhance the safety and convenience of travel. Therefore, autonomous driving technology is a significant innovation that will bring a better and more convenient future for mankind. In this study, methods based on sensor intrinsic and extrinsic calibration transformation matrices, road target detection algorithms, automatic vehicle detection, pedestrian intention prediction, and automatic pedestrian recognition are analyzed as applied to object detection and obstacle avoidance. This article provides an overview of the field of autonomous driving.

  • Open Access | Article 2024-03-25 Doi: 10.54254/2755-2721/50/20241389

    Financial assets prediction based on ARIMA, Random Forest and GRU

Financial asset prediction is a domain of great interest because of its potential to generate revenue. Financial asset prediction models have evolved from basic time series analysis models to contemporary hybrid models with the help of machine learning algorithms. Specifically, this study introduces and analyses three popular financial asset forecasting models and their hybrids in terms of the properties they have demonstrated in completed studies. Good results have been achieved using a time series analysis method, the autoregressive integrated moving average (ARIMA) model, for capturing linear elements. According to the analysis, the Random Forest (RF) algorithm, a machine learning technique, produced positive outcomes in dealing with data noise and interpretability. The deep learning technique gated recurrent unit (GRU) produced positive outcomes in terms of prediction accuracy. Based on these evaluations, this study indicates future research directions in the field of financial asset forecasting by analysing and organizing the characteristics of the three mainstream models.
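To make the ARIMA idea of capturing linear structure concrete, a toy AR(1) fit by least squares (a deliberately minimal stand-in for a full ARIMA model, which also handles differencing and moving-average terms) can be written as:

```python
import numpy as np

def fit_ar1(series):
    """Fit the AR(1) coefficient phi by least squares: x_t is approximated
    as phi * x_{t-1}. A toy stand-in for a full ARIMA fit."""
    x = np.asarray(series, dtype=float)
    prev_vals, curr = x[:-1], x[1:]
    return float(prev_vals @ curr) / float(prev_vals @ prev_vals)

def forecast_ar1(last_value, phi, steps=1):
    """Iterate the fitted linear recurrence to produce multi-step forecasts."""
    preds, v = [], last_value
    for _ in range(steps):
        v = phi * v
        preds.append(v)
    return preds
```

Production forecasting would use a library implementation (e.g. statsmodels' ARIMA) rather than this sketch, and the RF and GRU components would be fit on the residual, nonlinear structure.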

Copyright © 2023 EWA Publishing. Unless Otherwise Stated