Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 4th International Conference on Signal Processing and Machine Learning

    Conference Date

    2024-01-15

    Website

    https://www.confspml.org/

    ISBN

    978-1-83558-335-7 (Print)

    978-1-83558-336-4 (Online)

    Published Date

    2024-03-15

    Editors

    Marwan Omar, Illinois Institute of Technology

Articles

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241076

    Research on the principle, performance, and application of the UCB algorithm in multi-armed bandit problems

    As Internet technology continues to evolve, recommender systems have become an integral part of daily life. However, traditional methods are increasingly falling short of meeting evolving user expectations. Utilizing survey data from the MovieLens dataset, a comparative approach was employed to investigate the efficacy, performance, and applicability of the UCB (Upper Confidence Bound) algorithm in addressing the multi-armed bandit problem. The study reveals that the UCB algorithm significantly impacts the cumulative regret value, indicating its robust performance in the multi-armed bandit setting. Furthermore, LinUCB—an enhanced version of the UCB algorithm—exhibits exceptional overall performance. The algorithm's efficiency is not just limited to the regret value but extends to handling high-dimensional feature spaces and delivering personalized recommendations. Unlike traditional UCB algorithms, LinUCB adapts more fluidly to high-dimensional environments by leveraging a linear model to simulate the reward function associated with each arm. This adaptability makes LinUCB particularly effective for complex, feature-rich recommendation scenarios. The performance of the UCB algorithm is also contingent upon parameter selection, making this an important factor to consider in practical implementations. Overall, both UCB and its modified version, LinUCB, present compelling solutions for the challenges faced by modern recommender systems.
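The UCB selection rule the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the Bernoulli reward probabilities, horizon, and exploration constant below are invented for demonstration.

```python
import math
import random

def ucb1_select(counts, values, t, c=2.0):
    """Pick the arm maximizing empirical mean + sqrt(c * ln(t) / pulls)."""
    for arm, n in enumerate(counts):
        if n == 0:          # play every arm once before applying the bound
            return arm
    return max(range(len(counts)),
               key=lambda a: values[a] + math.sqrt(c * math.log(t) / counts[a]))

def run_ucb1(probs, horizon, seed=0):
    """Run UCB1 on Bernoulli arms; returns pull counts and total reward."""
    rng = random.Random(seed)
    k = len(probs)
    counts, values = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        arm = ucb1_select(counts, values, t)
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total += reward
    return counts, total

# illustrative arms only; the best arm (index 2) should attract most pulls
counts, total = run_ucb1([0.2, 0.5, 0.8], horizon=2000)
```

The confidence term shrinks as an arm is pulled more often, which is what drives the sub-linear cumulative regret the abstract highlights; LinUCB replaces the per-arm mean with a linear model over feature vectors.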

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241101

    Exploring correlations between economic indicators with natural and societal factors based on linear regression model

    Existing research on the determinants of a nation’s economic development has predominantly centered on individual factors, including energy, land resources, education, taxes, employment, and healthcare. Regrettably, there is a paucity of studies that examine these factors collectively and assess their respective contributions to economic development. Therefore, the primary objective of this study is to investigate the interrelationships between economic indicators and various natural and societal factors. The article first uses Pearson’s correlation coefficient to screen the factors that may affect a country’s economic development, retaining the more strongly correlated ones for further analysis. For the selected factors, two linear regression models are used: the Ordinary Least Squares (OLS) method for preliminary modeling of the extent to which each factor affects the economy, and the Fully Modified Ordinary Least Squares (FMOLS) method as an optimization model that further eliminates the less influential variables. After the final linear impact model is obtained, the data are screened based on the variables within the model. A portion of the selected data is used as a training set to train the model, and the remaining data are used as a test set to evaluate its performance. The results of the study show that factors including land area, army size, CO2 emissions, population, and minimum wage have varying degrees of integrated impact on the economic development of a country.
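The first two steps of that pipeline (Pearson screening, then an OLS fit) can be illustrated compactly. The toy data are invented for demonstration, and the closed-form fit below covers only the single-regressor case, not the paper's multivariate model:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ols_simple(xs, ys):
    """Closed-form OLS for y = a + b*x (minimizes the sum of squared residuals)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# toy data: a factor that tracks the economic indicator exactly (y = 1 + 2x)
xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]
r = pearson_r(xs, ys)      # a factor is kept for modeling only if |r| is high
a, b = ols_simple(xs, ys)  # intercept and slope of the preliminary fit
```

FMOLS, the study's second-stage model, additionally corrects for serial correlation and endogeneity, which a plain OLS fit like this one does not.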

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241113

    An investigation of machine learning-based video compression techniques

    As video technology continues to weave itself into the fabric of daily life, there is a growing need for enhanced storage and efficient video transmission. This surge in demand has led to heightened expectations and standards for video compression technology. Machine learning, as an up-and-coming technology, can bring its advantages to bear in the field of video compression. This article reviews the current state of research on combining video compression techniques with machine learning. It provides an overview of various research avenues for enhancement, spanning from conventional video compression algorithms to the fusion of traditional compression frameworks with machine learning methodologies, and even the development of novel end-to-end compression algorithms. In addition, the article explores the possible application scenarios of machine learning-based video compression algorithms in light of the characteristics of such non-standard and computationally demanding algorithms. Finally, the article speculates on the future of video compression algorithms based on the studies reviewed.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241116

    Comparison of VAE model and diffusion model in lung cancer image generation

    In the rapidly evolving domain of medical imaging, there's an increasing interest in harnessing deep learning models for enhanced diagnosis and prognosis. Among these, the Variational Autoencoder (VAE) and the Diffusion model stand out for their potential in generating synthetic lung cancer images. This research article delves into a comparative analysis of both models, focusing on their application in lung cancer imaging. Drawing from the "Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset," the study investigates the efficiency, accuracy, and fidelity of the images generated by each model. The findings suggest that while the VAE model offers faster image generation, its output is notably blurrier than its counterpart. Conversely, the Diffusion model, despite its relatively slower speed, is capable of producing highly detailed synthetic images even with limited epochs. This comprehensive comparison not only highlights the strengths and shortcomings of each model but also lays the groundwork for further refinements and potential clinical implementations. The broader objective is to catalyze advancements in lung cancer diagnosis, ultimately leading to better patient outcomes.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241129

    Navigating the digital currency landscape: A comprehensive examination from blockchain foundations to website security

    This paper offers an exhaustive exploration of the burgeoning digital currency realm, spanning from the foundational tenets of blockchain technology to the evaluation of pivotal website security vulnerabilities. The rise of decentralized cryptocurrencies, anchored in pioneering cryptography and consensus protocols, has deeply transformed traditional financial interactions. However, this transformation brings to the forefront new cybersecurity risks, borne from the intricate nature of these systems. Addressing these imminent challenges, the study introduces a holistic security model, meticulously designed for the Ethereum blockchain environment. This model integrates methods such as smart contract rigorous validation, transaction irregularity spotting, and network assault emulation. Rigorous experiments and simulations vouch for the model’s efficiency in pinpointing security breaches, marking an impressive 85% detection precision and an 81% robustness against uncharted zero-day onslaughts not encountered during model preparation. When juxtaposed with individual security tactics, the model exhibits a dominant stance in terms of attack deterrence, threat spectrum, and system productivity. Yet, the relentless advent of innovative attack strategies in this field means vulnerabilities remain. To bolster applicability in real-world scenarios, delving deeper into forecasting methodologies and broader tests on active systems prove essential. In essence, this multifaceted research initiative illuminates both theoretical and practical pathways to refine the strategic outline for unyielding security measures, championing prudent innovation and oversight in the rapidly evolving cryptocurrency landscape.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241133

    A comparative analysis of blockchain attack classifications

    As blockchain technology has evolved, it has introduced an array of functionalities and mechanisms. However, this advancement has also attracted a growing number of threats specifically targeting blockchains, heightening concerns regarding blockchain security. Although several researchers have attempted to categorize blockchain attacks in their respective studies, there remains a significant disparity among these taxonomies. This paper delves into three distinct classification methodologies, comparing their respective strengths and weaknesses. Additionally, it offers insights into the essential attributes that a comprehensive and effective taxonomy should possess. By breaking down each classification method, the paper provides a clearer understanding of how various researchers approach the challenge of categorizing blockchain threats. This includes looking at the criteria each method uses, such as the level of technical sophistication required for each attack, the potential damage inflicted, or the underlying motivations of the attackers. Furthermore, the paper emphasizes the importance of a universally accepted taxonomy, as this would not only facilitate more effective communication among researchers but also help in devising better defense mechanisms. In conclusion, by analyzing and comparing these classification methodologies, the study hopes to pave the way for a more unified and comprehensive approach to understanding blockchain security threats in the future.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241135

    Generating high-quality images from brain EEG signals

    This study presents DreamDiffusion, an innovative approach to produce high-quality images straight from electroencephalogram (EEG) brain signals, eliminating the need for thought-to-text translation. By harnessing pre-trained text-to-image models, DreamDiffusion integrates temporal masked signal modeling to adeptly pre-train the EEG encoder, ensuring accurate and dependable EEG data representation. Moreover, by integrating the CLIP image encoder, this method fine-tunes the alignment of EEG, text, and image embeddings, even with a scant amount of EEG-image pairs. Effectively navigating the complexities inherent in EEG-based image creation, such as data noise, limited content, and personal variances, DreamDiffusion showcases promising outcomes. Both quantitative and qualitative assessments validate its efficacy, marking a considerable advancement in the realm of efficient, affordable "thought-to-image" conversions, with promising implications in both neuroscience and computer vision.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241146

    Narrative-guided synthesis: Revolutionizing text-to-image translation based on Generative Adversarial Networks

    Synthesizing images from textual descriptions remains an intricate yet essential task in the field of artificial intelligence. However, this process often encounters challenges related to intricacy and time consumption. This study introduces a pioneering approach known as narrative-guided synthesis, harnessing the power of Generative Adversarial Networks (GANs) in conjunction with platforms such as Midjourney. This innovative technique transforms abstract narratives into stunning visual creations, streamlining the image generation process by providing real-time feedback and guidance. This research showcases an optimized framework that integrates diverse modules into a unified system, effectively reducing computational complexity and boosting overall efficiency. Central to this framework is an attention-guided mechanism that emphasizes semantic nuances within the text, ensuring greater fidelity in the generated images. This is complemented by spatially adaptive normalization techniques that maintain contextual relevance within the visual outputs. Preliminary results indicate that this approach not only competes with existing models but potentially surpasses them in producing visually and contextually accurate images, heralding a new era of digital innovation where technology and creativity converge seamlessly. Furthermore, this study underscores the transformative potential of AI in revolutionizing content production, interactive design, and user interfaces, promising a future where textual narratives can be visualized with unprecedented accuracy and creativity.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241191

    Deep Neural Network-based lap time forecasting of Formula 1 Racing

    Making comparisons and analyzing players in the sporting world is extremely valuable. The media, coaching staff, and players all rely on this data to assess performance, develop strategies, and make critical decisions. Therefore, neural networks can be employed to create a practical system that uses previous years’ data to predict future performance. This paper uses a Deep Neural Network (DNN) to predict the fastest lap time in qualifying for Formula 1 (F1) races. The network categorizes data to learn each driver’s performance at each circuit and provides separate predictions. By doing so, it considers the unique characteristics of each driver and track, enabling more accurate predictions. The paper demonstrates that neural networks tend to have better performance and adaptability in such complex environments compared to traditional mathematical methods like linear regression. Neural networks can learn from the data and detect patterns that are difficult to capture with traditional methods. As a result, they can achieve a relatively precise prediction, providing valuable insights and decision-making support for coaches, drivers, and fans.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241198

    The analysis of social E-commerce with artificial intelligence

    Nowadays, with the widespread popularization and development of the internet, the e-commerce industry has begun to rise, among which social e-commerce, as a new kind of community, has become popular on the Internet. At the same time, the field of artificial intelligence is slowly infiltrating every field of today's society. The diversified data contained in social e-commerce platforms has great potential value, but artificial intelligence, as an important technology for information analysis, is rarely applied in this direction. This paper fundamentally discusses the role of artificial intelligence in e-commerce. Taking Xiaohongshu as an example, the SWOT framework is used to analyze the advantages and drawbacks, as well as the potential benefits and risks, of applying artificial intelligence to an e-commerce platform with rich user data. The paper then discusses the limitations and extensibility of artificial intelligence in e-commerce platforms, and finally puts forward the application prospects of artificial intelligence in e-commerce. This study recommends that the social e-commerce community establish a robust data privacy protection system, increase investment in technology research and development, and fully leverage the potential of AI technology.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241205

    Comparative analysis of clustering algorithm and improved algorithm application in shipping congestion

    In recent years, due to the lack of reasonable planning of ship operation routes, shipping congestion accidents have been increasing. Such congestion seriously restricts port development and channel operation, poses a great threat to ship navigation safety, and limits the development prospects of the shipping industry. This paper aims to compare and analyze the application of the clustering algorithm and its improved variants in solving port and waterway congestion problems. By reviewing the literature on the use of clustering algorithms to address shipping congestion, and comparing the advantages and disadvantages of multiple clustering algorithms, this paper concludes that partition-based methods are more suitable for identifying port congestion, while the improved Fuzzy DBSCAN algorithm is more suitable for identifying channel congestion. This research will help in selecting clustering algorithms for solving shipping congestion problems in the future.
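A partition-based method of the kind the paper favors for port congestion can be illustrated with a tiny k-means (k = 2) on one-dimensional data; the vessel waiting times below are hypothetical, not drawn from the paper:

```python
def kmeans_1d(points, iters=20):
    """Partition-based clustering (k-means, k=2) on 1-D data, e.g. splitting
    vessel waiting times into an uncongested and a congested group."""
    centers = [min(points), max(points)]      # simple deterministic init
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[idx].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

waits = [0.5, 0.7, 0.6, 5.0, 5.5, 6.1]   # hypothetical waiting times (hours)
centers, groups = kmeans_1d(waits)        # short waits vs. congested waits
```

Density-based methods such as the Fuzzy DBSCAN variant the paper recommends for channels differ in that they grow clusters from dense neighborhoods rather than partitioning around a fixed number of centers.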

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241206

    Investigating techniques to optimize data movement and reduce memory-related bottlenecks

    In the ever-changing realm of computing, the importance of efficient data movement and the reduction of memory-related bottlenecks cannot be overstated. This research paper delves into a thorough examination of diverse methodologies and approaches aimed at optimizing data transfer and mitigating the constraints imposed by memory limitations. It offers an all-encompassing survey of pertinent literature, delving deep into techniques designed to enhance data movement efficiency, discussing effective strategies for alleviating memory bottlenecks, and presenting the outcomes of extensive experiments conducted. The findings of this study underscore the critical role played by these techniques in augmenting the performance, efficiency, and scalability of contemporary computing systems. In a world where the demand for computational power continues to grow, the ability to streamline data movement and overcome memory constraints is essential. By shedding light on these pivotal aspects of computing, this paper contributes to a more profound understanding of how to harness the full potential of modern computing systems, ultimately paving the way for groundbreaking advancements in the field.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241207

    Navigating the electronic landscape: Exploring FPGAs and MCUs architectures in electronic design

    Field-Programmable Gate Arrays (FPGAs) and Microcontroller Units (MCUs) are foundational pillars in electronic design, each possessing distinct attributes and applications. This paper embarks on an exploration of these components, delving deep into their unique architectures, performance characteristics, and practical uses. This study, through a comprehensive comparative analysis, examines the strengths and limitations inherent to each of these components. It seeks to serve as a guiding compass for technologists, aiding them in making informed decisions when embarking on electronic design projects. FPGAs emerge as the frontrunners in high-computation endeavors, flexing their computational muscles with finesse. Conversely, MCUs establish their dominance in real-time, low-power applications, where their unwavering reliability is paramount. As electronic systems continue to evolve and infiltrate various aspects of our lives, comprehending the intricacies of FPGA and MCU architectures assumes an increasingly pivotal role for engineers, designers, and researchers. Armed with this knowledge, they can confidently navigate the intricate landscape of electronics, strategically harnessing the power of these two fundamental building blocks to bring innovation and efficiency to the forefront.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241230

    Drone detection with radio frequency signals and deep learning models

    The widespread use of drones raises security, environmental, privacy, and ethical issues; therefore, effective detection of drones is important. There are several methods for detecting drones, such as wireless signal detection, photoelectric detection, radar detection, and sound detection. However, these detection methods are not accurate enough to identify drones reliably in practice. To address this problem, more robust drone detection methods are needed. In addition, different types of drones and application scenarios call for different technical means of detection and identification. Based on 2-class, 4-class, and 10-class problems on an open radio frequency (RF) signal dataset, we compared the drone detection and classification performance of different machine learning and deep learning models, as well as multi-task models proposed by combining different RF methods with convolutional neural networks (CNNs). Our experimental results show that the XGBoost model achieved the best results on this dataset, with 99.96% accuracy on the 2-class problem, 92.31% on the 4-class problem, and 74.81% on the 10-class problem, exhibiting the best performance for drone detection and classification.
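As a lightweight stand-in for the heavier models compared in the paper, a nearest-centroid classifier shows the basic shape of RF-feature-based drone detection. The two-dimensional features and labels below are invented for illustration and do not come from the dataset the paper uses:

```python
def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_fit(X, y):
    """Fit one centroid per class label."""
    classes = sorted(set(y))
    return {c: centroid([x for x, lab in zip(X, y) if lab == c])
            for c in classes}

def predict(model, x):
    """Assign x to the class with the closest centroid (squared distance)."""
    def d(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda c: d(model[c], x))

# hypothetical 2-D RF features, e.g. [normalized bandwidth, peak power]
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["background", "background", "drone", "drone"]
model = nearest_centroid_fit(X, y)
```

Gradient-boosted trees such as XGBoost, which performed best in the study, learn far more flexible decision boundaries over the same kind of RF feature vectors, at the cost of more training computation.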

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241236

    A comparative study of large-scale and lightweight convolutional neural networks for ImageNet classification

    In the field of convolutional neural networks (CNNs), many impressive architectures have been published in recent years. These can be roughly divided into two groups: large-scale models and lightweight models. Large-scale models are characterized by many trainable weights and complex network structures, giving them strong effectiveness in various computer vision tasks, and they have become essential components of many modern visual recognition systems. Lightweight CNNs are designed to maintain high performance with limited memory and computational resources. They are highly efficient in terms of inference time and resource utilization, making them particularly suitable for mobile and edge computing devices. This work focuses on some prominent models based on the ImageNet database and explores the reasons for their frameworks’ success. By analyzing these models, a trend can be identified in the development of CNN models: reasonably increasing the scale of the model and utilizing suitable frameworks can improve both accuracy and efficiency.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241242

    Neural Style Transfer with automatic style weight searching

    Neural Style Transfer (NST) has garnered significant attention in the field of computer vision. Previous research in this area has made important breakthroughs, creating various style transfer models and innovative architectural designs, and has achieved success in commercial applications. This paper presents a novel design for automatic style weight determination based on constraining the initial total loss within a healthy range and finding the optimal solution through grid search. This design aims to harness the potential of existing NST models by automating the optimization of neural style transfer hyperparameters. The paper first discusses the impact of the content and style loss weights on the generated images and validates the influence of weight ratios on image quality through experimental adjustments of weight proportions. It also confirms that the initial loss essentially defines the optimization space of the optimizer. The paper explores the significance of the initial loss and proposes a method to improve image generation quality by constraining the initial loss range.
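The constrain-then-search idea can be written down compactly. The loss values, weight candidates, and range bounds below are placeholders for illustration, not figures from the paper:

```python
def total_loss(content_loss, style_loss, alpha, beta):
    """Standard NST objective: weighted sum of content and style losses."""
    return alpha * content_loss + beta * style_loss

def search_style_weight(content_loss, style_loss, lo, hi, betas, alpha=1.0):
    """Grid search that keeps only the style weights whose *initial* total
    loss falls inside the 'healthy' range [lo, hi]."""
    return [b for b in betas
            if lo <= total_loss(content_loss, style_loss, alpha, b) <= hi]

# placeholder initial losses and candidate style weights
candidates = search_style_weight(2.0, 10.0, lo=5.0, hi=50.0,
                                 betas=[0.1, 1.0, 10.0, 100.0])
```

Because the initial total loss bounds the region the optimizer can explore, filtering the weight grid this way discards settings that would start the optimization either too flat or too steep before any image is generated.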

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241245

    Comparison of machine learning-based book comments sentiment analysis for constructing recommendation system

    The exponential growth in the volume of books available, along with the proliferation of online platforms, has made it increasingly challenging for readers to find books tailored to their interests. This research paper aims to address this challenge by developing an effective book recommendation system based on user reviews and ratings, primarily drawn from Amazon’s dataset covering the period from May 1996 to July 2014. Using a K-Nearest Neighbors (KNN) algorithm and a Random Forest baseline model, the study focuses on comparative analyses in terms of Mean Squared Error (MSE) and computational costs. The KNN model outperformed the baseline model with a lower MSE of 0.15 compared to 0.38 and proved to be computationally less exacting. While the KNN model is currently the more tenable option for deployment, the paper posits that an ensemble approach may offer a more robust solution. Future work aims to include sentiment analysis, explore other recommendation algorithms, and make use of more advanced evaluation metrics. This study provides a foundation for the advancement of book recommendation systems, offering insights into their efficiency and effectiveness.
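The KNN rating predictor and MSE comparison at the center of that study can be sketched as follows; the feature vectors and ratings are invented for illustration and are far simpler than the Amazon review data the paper uses:

```python
def knn_predict(train, query, k=3):
    """Predict a rating as the mean rating of the k nearest training items
    (Euclidean distance over the feature vectors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return sum(rating for _, rating in nearest) / k

def mse(pairs):
    """Mean squared error over (prediction, target) pairs."""
    return sum((p - t) ** 2 for p, t in pairs) / len(pairs)

# hypothetical (feature vector, rating) training items
train = [([1.0, 0.0], 5.0), ([0.9, 0.1], 4.0),
         ([0.0, 1.0], 1.0), ([0.1, 0.9], 2.0)]
pred = knn_predict(train, [0.95, 0.05], k=2)  # near the 5.0 and 4.0 items
err = mse([(pred, 4.5)])
```

The study's comparison holds this prediction step fixed and contrasts the resulting MSE and runtime against a Random Forest baseline over the same features.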

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241246

    A comprehensive research of deep learning approaches for High Definition map construction in autonomous driving

    In the realm of autonomous driving, High Definition Maps (HD maps) are indispensable for safe and precise navigation. Traditional HD map construction methods involving point cloud capture and SLAM have proven effective but labor-intensive. This paper addresses the growing interest in leveraging deep learning techniques to streamline HD map creation. This paper presents a systematic exploration of deep learning methodologies for HD map construction. It categorizes these approaches into two core components: Feature Extraction and Feature Decoding. Feature Extraction involves the transformation of input data, comprising images and LiDAR point clouds, into Bird's Eye View (BEV) representations. Feature Decoding is dissected into rasterized map objectives and vector map objectives. Detailed analysis is conducted on prominent methodologies. The paper provides a nuanced evaluation of these deep learning techniques, highlighting their respective strengths and limitations. Factors such as precision, computational efficiency, and the preservation of fine-grained details are considered when selecting the most suitable method. This comprehensive review summarizes and prospects the research in related fields.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241255

    Advancing signal integrity and application analysis through data-flow mapping

    Data-flow mapping is a crucial method in signal processing and optimization, managing data flow within systems. It’s essential in signal compensation, particularly in telecommunications, audio processing, and biomedical signal processing. Four main algorithm categories underpin data-flow mapping: heuristics, meta-heuristics, Integer Linear Programming (ILP), and Constraint Satisfaction Problems (CSP). Heuristic and meta-heuristic methods like Genetic Algorithms (GA) and Ant Colony Optimization (ACO) provide approximate solutions, crucial for complex problems. ILP and Branch and Bound (B&B) methods offer precise solutions by exhaustive searches under constraints. CSP focuses on satisfying imposed conditions. These methodologies have practical applications, such as signal compensation in communication systems and improving medical imaging like MRI and ultrasound. They’re also integrated with machine learning, quantum computing, and specialized hardware for 5G/6G communications and IoT. Real-time processing and noise reduction advancements enhance consumer audio and diverse sectors. In summary, data-flow mapping and its algorithms drive signal processing innovations across domains, with evolving technology integration ensuring their lasting importance.

  • Open Access | Article 2024-03-15 Doi: 10.54254/2755-2721/47/20241256

    Revolutionizing machine learning: Harnessing hardware accelerators for enhanced AI efficiency

    In recent decades, the field of Artificial Intelligence (AI) has undergone a remarkable evolution, with machine learning emerging as a pivotal subdomain. This transformation has led to increasingly complex algorithms and soaring data volumes, necessitating robust computational resources. Conventional central processing units (CPUs) are struggling to meet the demanding requirements of modern AI applications. In response to this computational challenge, a new generation of hardware accelerators has been developed to enhance the processing and learning capabilities of machine learning systems. Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), and Application Specific Integrated Circuits (ASICs) are among the specialized accelerators that have emerged. These hardware accelerators have proven instrumental in significantly improving the efficiency of machine learning tasks. This paper provides a comprehensive exploration of these hardware accelerators, offering insights into their design, functionality, and applications. Moreover, it examines their role in empowering machine learning processes and discusses their potential impact on the future of AI. By addressing current trends and anticipated challenges, this paper contributes to a deeper understanding of the dynamic landscape of hardware acceleration in the context of machine learning research and development.

Copyright © 2023 EWA Publishing. Unless Otherwise Stated