Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 4th International Conference on Signal Processing and Machine Learning

    Conference Date

    2024-01-15

    Website

    https://www.confspml.org/

    ISBN

    978-1-83558-343-2 (Print)

    978-1-83558-344-9 (Online)

    Published Date

    2024-03-22

    Editors

    Marwan Omar, Illinois Institute of Technology

Articles

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241045

    Alzheimer's disease intelligent detection combining XGBOOST and NARX

Mental health illness currently has a huge impact on society. In this paper, an attempt is made to analyze and predict data from ADNI using single and composite algorithms. The investigation employs the chi-square test, Spearman's correlation coefficient, the maximal information coefficient, cost-sensitive learning, SMOTE, ADASYN, SMOTE+ENN, and SMOTE+Tomek. Specifically, this paper adopts a random forest to impute missing data and, given that the data are imbalanced, combines SMOTE-Tomek integrated sampling with XGBoost and a Bayesian optimization scheme; the best classification is obtained by XGBoost combined with SMOTE-Tomek. Furthermore, this paper uses a NARX network to track changes in time-based indicators, providing another insight to refine the study of intelligent diagnosis of Alzheimer's disease.
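
    As an illustration of the abstract's headline combination, here is a minimal sketch of SMOTE-Tomek resampling feeding an XGBoost classifier, assuming the scikit-learn, imbalanced-learn, and xgboost packages; the synthetic tabular data and hyperparameters are placeholders for the ADNI features and the Bayesian-optimized settings.

    ```python
    # Sketch: SMOTE-Tomek integrated sampling + XGBoost, the pipeline the abstract
    # reports as best. Synthetic imbalanced data stands in for the ADNI features.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report
    from imblearn.combine import SMOTETomek
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Resample only the training split so the test set stays untouched.
    X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)

    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
    clf.fit(X_res, y_res)
    print(classification_report(y_te, clf.predict(X_te)))
    ```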

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241046

    Comparison of K-Means, K-Medoids and K-Means++ algorithms based on the Calinski-Harabasz index for COVID-19 epidemic in China

The novel coronavirus spreads from person to person through close contact and respiratory droplets produced by coughing or sneezing. Various studies have been conducted globally to deal with COVID-19. However, no cure for the virus has been found, and efficient data processing methods for sudden outbreaks have not yet been identified. This study compares three clustering algorithms on the same data sets to determine the best data processing method. The data come from the Chinese Center for Disease Control and Prevention and include two attributes: confirmed cases and death cases. We selected the data from the initial stage of the outbreak until October 31, 2021, and compared the clustering results of the K-Means, K-Medoids, and K-Means++ algorithms on the spread of the novel coronavirus in China. Comparing the Calinski-Harabasz index values from K=2 to K=10 shows that the three algorithms have almost the same clustering effect when K does not exceed 6, but when K is greater than 6 the K-Medoids clustering effect is significantly better. Therefore, of the three clustering algorithms used, the best method for clustering the spread of the novel coronavirus outbreak in China is K-Medoids. The results of this study provide ideas for future researchers to choose an appropriate cluster analysis method to effectively process data in the early stages of an epidemic.
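
    A minimal sketch of the index-based comparison, assuming scikit-learn for K-Means/K-Means++ and the scikit-learn-extra package for K-Medoids; random 2-D points stand in for the (confirmed cases, deaths) attributes.

    ```python
    # Sketch: score K-Means, K-Means++, and K-Medoids clusterings with the
    # Calinski-Harabasz index for K = 2..10, as in the study's comparison.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import calinski_harabasz_score
    from sklearn_extra.cluster import KMedoids  # from scikit-learn-extra

    X = np.random.default_rng(0).random((300, 2))   # toy stand-in for the CDC data
    for k in range(2, 11):
        models = {
            "K-Means":   KMeans(n_clusters=k, init="random", n_init=10, random_state=0),
            "K-Means++": KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0),
            "K-Medoids": KMedoids(n_clusters=k, random_state=0),
        }
        scores = {name: calinski_harabasz_score(X, m.fit_predict(X))
                  for name, m in models.items()}
        print(k, {name: round(s, 1) for name, s in scores.items()})
    ```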

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241052

    Signal detection algorithms for massive MIMO system

As 5G communication networks mature, the requirements for communication signal detection keep rising. For the massive MIMO signal detection problem, this paper summarizes detection algorithms that can replace traditional ZF and MMSE detection, avoiding large-scale matrix inversion and reducing computational complexity. These mainly include general iterative methods, typically represented by SSOR, which drive the estimated transmit signal toward the ideal value through iteration, and series-expansion methods, which take a low-order series expansion as the initial value of the iteration to accelerate convergence, typically represented by the MLI algorithm. However, as communication demand grows and the number of users keeps increasing, the performance of the above algorithms may degrade seriously, so AI-based signal detection, which learns autonomously through deep neural networks and includes both model-driven and data-driven schemes, is a good alternative.
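
    To make the iterative idea concrete, here is a minimal SSOR-based MMSE detection sketch; the antenna counts, noise level, relaxation factor, and iteration count are illustrative assumptions, not values from any surveyed system.

    ```python
    # Sketch: SSOR iterations solve A x = b with A = H^H H + sigma^2 I, avoiding
    # the explicit matrix inverse that makes exact MMSE detection expensive.
    import numpy as np
    from scipy.linalg import solve_triangular

    rng = np.random.default_rng(0)
    N_t, N_r, sigma2, omega, iters = 16, 128, 0.1, 1.0, 5

    H = (rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)
    x_true = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), N_t)   # QPSK symbols
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_r) + 1j * rng.standard_normal(N_r))
    y = H @ x_true + noise

    A = H.conj().T @ H + sigma2 * np.eye(N_t)        # MMSE filtering matrix
    b = H.conj().T @ y
    D, L = np.diag(np.diag(A)), np.tril(A, -1)

    x = np.zeros(N_t, dtype=complex)
    for _ in range(iters):                           # SSOR: forward then backward sweep
        x = solve_triangular(D / omega + L, b - (A - D / omega - L) @ x, lower=True)
        x = solve_triangular(D / omega + L.conj().T, b - (A - D / omega - L.conj().T) @ x,
                             lower=False)
    print(np.round(x.real) + 1j * np.round(x.imag))  # hard QPSK decisions
    ```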

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241054

    On the use of PID control to improve the stability of the quad-rotor UAV

The recent development of drones has attracted great attention at home and abroad. For the civil and agricultural plant protection industries, drones can provide safer and more stable flight and improve production efficiency. For the exploration industry, the application of UAVs (Unmanned Aerial Vehicles) will reduce unnecessary casualties; in badly damaged areas, UAVs can deliver accurate on-site information and data more quickly. Drones can replace people in high-risk operations and make technological agricultural development and surveying more stable and safe. For the military industry, UAVs can be applied to different combat environments. Although UAV combat and reconnaissance are not yet mainstream combat methods, UAVs have become an indispensable part of them in recent years, and improving UAV stability will also increase military combat capability, so UAVs have good development prospects and economic benefits. Nowadays, PID (Proportion Integration Differentiation) control plays an indispensable role in quad-rotor UAVs. How to combine different types of PID control with existing intelligent technology to improve the stability of quad-rotor UAVs has been a hot issue in recent years. This paper summarizes previous research results in this field, briefly describes the motion mechanism of the quad-rotor UAV and its wind disturbance resistance, the operating principles of and contrast between traditional PID control and fuzzy PID control, and the effect of adding intelligent algorithms to PID control, and puts forward suggestions for future UAV PID control algorithms.
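
    For reference, a minimal discrete positional PID loop of the kind the survey discusses; the gains and the toy first-order altitude plant are illustrative assumptions, not taken from any cited controller.

    ```python
    # Sketch: discrete PID control, u = Kp*e + Ki*integral(e) + Kd*de/dt.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_error = 0.0, 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt                  # accumulate error
            derivative = (error - self.prev_error) / self.dt  # damp overshoot
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Toy loop: hold a 10 m altitude on a plant whose climb rate equals the command.
    pid, altitude = PID(kp=1.2, ki=0.3, kd=0.4, dt=0.01), 0.0
    for _ in range(1000):
        altitude += pid.update(10.0, altitude) * 0.01
    print(round(altitude, 2))   # settles near 10.0
    ```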

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241056

    GravNet: A novel deep learning model with nonlinear filter for gravitational wave detection

Gravitational waves (GW) detected by LIGO, VIRGO, and upcoming facilities have ushered in a transformative era for astronomy and physics. However, these cosmic ripples present unique challenges. Most GW signals are not only weak but also fleeting, lasting mere seconds. This poses a significant hurdle to current search strategies. The prevalent matched filtering technique, while effective, demands an exhaustive search through a template bank, slowing down data processing. To overcome these limitations, machine learning, particularly Convolutional Neural Networks (CNNs), has emerged as a solution. Recent studies demonstrate that CNNs surpass traditional matched-filtering methods in detecting weak GW signals, extending beyond the training set parameters. Nonetheless, optimizing these deep learning models and assessing their robustness in GW signal detection remains essential. In this study, we explore various methods to enhance CNN models' effectiveness using simulated data from three gravitational wave interferometers. Our investigation spans denoising techniques, CNN architectures, and pretrained AI models. Notably, the Constant-Q transform (CQT) outperforms the Fast Fourier transform in denoising raw gravitational signals. Furthermore, employing the pretrained model EfficientNet enhances GW detection efficiency. Our proposed CNN model, GravNet, combines CQT, EfficientNet, and an optimized CNN structure. GravNet achieves an impressive 76.5% accuracy and 0.85 AUC. This innovative approach offers valuable insights into harnessing deep learning models for more efficient and accurate gravitational wave detection and analysis.
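
    A sketch of the CQT preprocessing step on a toy chirp, using librosa as one possible CQT implementation; the sampling rate, frequency range, and noise level are assumptions, not the paper's settings.

    ```python
    # Sketch: turn a 1-D strain series into a dB-scaled CQT time-frequency image,
    # the kind of 2-D input a CNN such as GravNet consumes.
    import numpy as np
    import librosa

    sr = 2048                                          # Hz, a common GW sampling rate
    t = np.arange(0, 2.0, 1 / sr)
    chirp = np.sin(2 * np.pi * (30 + 40 * t) * t)      # toy chirp-like signal
    noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)
    strain = (chirp + noise).astype(np.float32)

    cqt = np.abs(librosa.cqt(strain, sr=sr, fmin=20.0, n_bins=64, hop_length=64))
    image = librosa.amplitude_to_db(cqt, ref=np.max)   # 2-D image for the CNN
    print(image.shape)                                 # (frequency bins, time frames)
    ```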

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241058

    House price prediction based on different models of machine learning

Housing price prediction is a typical regression problem in machine learning. Common algorithms include linear regression, support vector regression, random forest, and extreme gradient boosting models based on ensemble learning methods. Different models applied to the same problem will produce different results, so this research compares these models to show which is more accurate and robust. Given the practical problem of housing price prediction, various characteristics of houses are analyzed and studied; several regression models are applied; the performance of the models on this problem is compared; a horizontal comparison of the advantages and disadvantages of the different models is made; and the differences in their effects are analyzed and summarized.
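
    A minimal cross-validated comparison in the spirit of the paper, assuming scikit-learn and xgboost; the public California housing data stands in for the paper's own dataset.

    ```python
    # Sketch: compare regression models on one housing dataset by mean CV R^2.
    from sklearn.datasets import fetch_california_housing
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.ensemble import RandomForestRegressor
    from xgboost import XGBRegressor

    X, y = fetch_california_housing(return_X_y=True)
    X, y = X[:4000], y[:4000]                 # subsample to keep the sketch fast
    models = {
        "linear":        make_pipeline(StandardScaler(), LinearRegression()),
        "svr":           make_pipeline(StandardScaler(), SVR()),
        "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "xgboost":       XGBRegressor(n_estimators=300, learning_rate=0.1),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean R^2 = {r2:.3f}")
    ```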

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241059

    The advantages and short circuit characteristics of SiC MOSFETs

SiC MOSFETs have exhibited considerable benefits in high-frequency, high-voltage, and high-temperature power electronics applications thanks to outstanding material attributes and the rapid advancement of power electronics technology. However, SiC MOSFETs' lower short-circuit tolerance and faster switching speeds pose new challenges for short-circuit protection technology. In the opening section of the study, Si and SiC MOSFETs are compared and evaluated using various models and parametric factors, demonstrating that SiC MOSFETs outperform Si MOSFETs in a variety of conditions and applications. The main body of the paper first explains the various SiC MOSFET short-circuit failure types and their underlying mechanisms. In addition, it examines the fundamentals of short-circuit test procedures and SiC MOSFET test circuits. The issues and limitations of currently available SiC MOSFET short-circuit protection technology are then explored, along with a thorough examination of the factors affecting SiC MOSFET short-circuit behavior. Lastly, the development trend of SiC MOSFET short-circuit protection technology is forecast, and potential future areas for improvement and innovation are considered. As technology advances and application requirements expand, SiC MOSFET short-circuit protection technology will be enhanced and optimized to satisfy the needs of efficient and dependable power electronic systems.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241063

    Assessing and neutralizing multi-tiered security threats in blockchain systems

    Blockchain technology, the backbone of digital cryptocurrencies, has rapidly ascended as a pivotal tool in modern commerce due to its decentralized, immutable nature. It offers fresh, innovative avenues for overcoming trust issues inherent in traditional trading systems. Yet, the unique traits that make blockchain advantageous also render it vulnerable. Cybercriminals are ceaselessly innovating, devising new tactics to exploit these vulnerabilities and resulting in a surge of security incidents that have led to substantial economic losses. The increasing frequency and sophistication of these attacks jeopardize the integrity and stability of blockchain networks. This paper offers a comprehensive study of blockchain system architecture, the principles underlying various attack methods, and viable defense strategies, all organized within a hierarchical framework. Initially, the paper categorizes blockchain attacks according to the hierarchy of blockchain systems, providing a detailed exploration of the characteristics and principles behind these attacks at each level. Next, the paper summarizes existing countermeasures and proposes effective new strategies for bolstering blockchain security. The paper concludes with a recap of its key findings and outlines the landscape for future research in blockchain security.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241065

    Federated learning algorithm-based skin cancer detection

Owing to oversights regarding training data privacy in Deep Learning (DL), there have been inadvertent leaks of data containing personal information, with consequential impacts on data providers. Safeguarding data privacy throughout the deep learning process therefore emerges as a paramount concern. In this paper, the author suggests integrating FedAvg into the training procedure as a measure to ensure data security and privacy. In the experiments, the author first applied data augmentation to equalize the class samples in the dataset, then simulated four users on a four-core Central Processing Unit (CPU) and established a network architecture based on DenseNet201. Each user cloned all parameters of the global model and received an equal portion of the dataset. After the parameters were updated locally, the weights were aggregated by averaging and passed back to the global model. Additionally, the author introduced a learning rate annealer to help the model converge better. The experimental results demonstrate that incorporating FedAvg saves training time and achieves excellent performance in skin cancer classification. Despite a slight loss in accuracy, the algorithm addresses privacy concerns, making the use of FedAvg highly valuable.
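
    A sketch of the aggregation step described here, in PyTorch; the model, data loaders, and local training routine are placeholders, and the DenseNet201 backbone and skin cancer data are omitted.

    ```python
    # Sketch: one FedAvg round - each simulated user trains a clone of the global
    # model, then the server averages the local weights back into the global model.
    import copy
    import torch

    def fedavg_round(global_model, user_loaders, local_train):
        local_states = []
        for loader in user_loaders:              # each user clones the global model
            local = copy.deepcopy(global_model)
            local_train(local, loader)           # user-side updates (SGD etc.)
            local_states.append(local.state_dict())
        avg = copy.deepcopy(local_states[0])
        for key in avg:                          # element-wise average of weights
            avg[key] = torch.stack([s[key].float() for s in local_states]).mean(dim=0)
        global_model.load_state_dict(avg)
        return global_model

    # Toy demo: two "users" with a no-op local trainer, just to exercise the round.
    model = torch.nn.Linear(4, 2)
    fedavg_round(model, [None, None], lambda m, loader: None)
    ```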

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241066

    Distributed U-net model-based image segmentation for lung cancer detection

In the wake of the COVID-19 pandemic that began in 2019, lung diseases, especially lung cancer and Chronic Obstructive Pulmonary Disease (COPD), have become an urgent global health issue. Early detection and accurate diagnosis of these conditions are critical for effective treatment and improved patient outcomes. To further research and reduce the error rate of hospital diagnoses, this comprehensive study explores the potential of Computer-Aided Diagnosis (CAD) systems, especially those utilizing advanced deep learning models such as U-Net. Compared with other authors' literature, this study explores the capabilities of U-Net in detail and enhances the simulated CAD system through the VGG16 algorithm. An extensive dataset consisting of lung CT images and corresponding segmentation masks, curated collaboratively by multiple academic institutions, serves as the basis for empirical validation. The efficiency of the U-Net model is evaluated rigorously under multiple hardware configurations, including a single CPU, a single GPU, distributed GPUs, and federated learning, demonstrating the effectiveness of the method in lung disease segmentation tasks. The empirical results affirm the robust performance of the U-Net model, which is most effective when distributed across four GPUs, and highlight the huge potential of U-Net-based CAD systems for accurate and timely lung disease detection and diagnosis.
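
    A minimal U-Net sketch in PyTorch showing the encoder-decoder-with-skip-connections pattern the study evaluates; the depth, channel counts, and the VGG16-based variant are not reproduced here.

    ```python
    # Sketch: tiny two-level U-Net producing a 1-channel segmentation mask.
    import torch
    import torch.nn as nn

    def block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = block(1, 32), block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec = block(64, 32)                 # 64 = 32 upsampled + 32 skip
            self.head = nn.Conv2d(32, 1, 1)          # mask logits

        def forward(self, x):
            s = self.enc1(x)                         # skip-connection features
            x = self.enc2(self.pool(s))
            x = self.up(x)
            x = self.dec(torch.cat([x, s], dim=1))   # concatenate skip features
            return self.head(x)

    print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
    ```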

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241067

    Empirical validation of federated learning with YOLO v7-tiny for road sign detection: A simulation-based comparative study

    Detecting road signs is a critical component in the development of intelligent driving systems. While centralized machine learning approaches have demonstrated potential in this field, the untapped potential of Federated Learning warrants exploration. This research aims to bridge this gap by examining the feasibility of applying Federated Learning within edge Artificial Intelligence (AI) computing environments for the purpose of road sign detection. Utilizing the You Only Look Once (YOLO) v7-tiny model and a range of experimental parameters demonstrates that Federated Learning is viable and outperforms centralized approaches under specific conditions. The study's empirical analysis highlights the sensitivity of detection accuracy to varying experimental parameters. The study contributes to the existing literature by establishing the efficacy of Federated Learning in road sign detection, particularly in edge AI settings constrained by hardware limitations and privacy concerns. However, the study acknowledges limitations, including the lack of deployment on actual edge AI devices and a restricted range of experimental parameters. Future research should aim for more exhaustive experiments with broader datasets, diverse parameters, and real-world edge AI environments. These findings offer valuable insights for future implementations in intelligent automotive systems.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241070

    Machine learning with oversampling for space debris classification based on radar cross section

Over the past few years, the likelihood of collisions between space objects has increased as the quantity of space debris has risen, making space debris classification and identification more crucial to space asset security and space situational awareness. Radar cross section (RCS), one of the essential parameters for tracking space debris, has been measured by the European Incoherent Scatter Scientific Association (EISCAT) and other radar systems. This study investigates the effectiveness of seven machine learning methods for classifying space objects based on RCS data from the European Space Agency (ESA). To tackle the class-imbalance issue (the ratio of space debris to non-debris is approximately 5:1 in the dataset), three oversampling techniques are employed: the Synthetic Minority Oversampling Technique (SMOTE), SVM-based SMOTE (SMOTE-SVM), and Adaptive Synthetic Sampling (ADASYN). The experiments show that, on the test set, the combination of SVM with SMOTE-SVM oversampling reaches an accuracy of 99.7%, a precision of 98.7%, and a recall of 99.4%, outperforming the rest of the models.
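
    A sketch of the reported best combination, assuming imbalanced-learn's SVMSMOTE as the SMOTE-SVM variant; synthetic features with the quoted roughly 5:1 imbalance stand in for the ESA RCS data.

    ```python
    # Sketch: SVM-based SMOTE oversampling of the minority class, then an SVM
    # classifier, scored with the metrics the abstract reports.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score
    from sklearn.svm import SVC
    from imblearn.over_sampling import SVMSMOTE

    X, y = make_classification(n_samples=3000, weights=[5/6, 1/6], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    X_res, y_res = SVMSMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample
    pred = SVC().fit(X_res, y_res).predict(X_te)
    print(accuracy_score(y_te, pred), precision_score(y_te, pred),
          recall_score(y_te, pred))
    ```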

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241073

    Hardware motion estimation methods survey and example real-time architecture design

Motion estimation is essential in video processing, computer vision, and related fields because of its applications in improving data compression, enhancing image/video quality, increasing the accuracy of computer vision tasks, and conserving computational resources. Typical motion estimation methods include FS (Full Search) and DS (Diamond Search). Traditional motion estimation methods face the challenge of high complexity in practical real-time applications. This paper therefore surveys fast motion estimation algorithms and hardware architecture designs for real-time applications. Targeting a processing speed of 1920x1080@60fps, the paper explores the key ideas of FS and DS hardware structure design based on the Vivado HLS high-level synthesis tool and provides a feasible hardware scheme that meets the real-time requirements.
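
    To make the FS baseline concrete, a minimal full-search block-matching sketch with a SAD cost; the block size, search range, and synthetic frames are illustrative, not the paper's HLS configuration.

    ```python
    # Sketch: exhaustive full-search motion estimation for one block.
    import numpy as np

    def full_search(ref, cur, bx, by, B=16, R=8):
        """Best (dx, dy) for the BxB current block at (bx, by), by minimum SAD."""
        block = cur[by:by+B, bx:bx+B].astype(np.int32)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-R, R + 1):                  # scan the whole search window
            for dx in range(-R, R + 1):
                y0, x0 = by + dy, bx + dx
                if 0 <= y0 <= ref.shape[0] - B and 0 <= x0 <= ref.shape[1] - B:
                    cand = ref[y0:y0+B, x0:x0+B].astype(np.int32)
                    sad = np.abs(block - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dx, dy)
        return best_mv, best_sad

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, (2, 3), axis=(0, 1))      # shift content down 2, right 3
    print(full_search(ref, cur, 16, 16))         # expect mv (-3, -2): match lies up-left
    ```

    Diamond search replaces the exhaustive window scan with a pattern-guided descent, which is what makes it attractive for the real-time hardware budget the paper targets.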

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241075

    HiFormer: Hierarchical transformer for grounded situation recognition

The prevalence of surveillance video is critical to public safety, but camera operators are overwhelmed, and existing Object Detection and Action Recognition models are unable to identify the relevant events. In light of this, Grounded Situation Recognition (GSR) provides a practical solution for recognizing events in a surveillance video: GSR can identify the noun entities (e.g., humans) and their actions (e.g., driving), and provide grounding frames for the involved entities. Compared with Action Recognition and Object Detection, GSR is more in line with human cognitive habits, better allowing law enforcement agencies to understand the predictions. However, the crucial issue with most existing frameworks is the neglect of verb ambiguity, that is, verbs that are superficially similar but have distinct meanings (e.g., buying vs. giving). Many existing works propose a two-stage model that first blindly predicts the verb and then uses this verb information to predict semantic roles. These frameworks ignore the importance of noun information during verb prediction, making them susceptible to misidentification. To address this problem and better discern between ambiguous verbs, we propose HiFormer, a novel hierarchical transformer framework that directly and comprehensively considers similar verbs for each image to more accurately identify the salient verb, the semantic roles, and the grounding frames. Compared with the state-of-the-art models in Grounded Situation Recognition (SituFormer and CoFormer), HiFormer shows an advantage of over 35% and 20% in Top-1 and Top-5 verb accuracy respectively, as well as 13% in Top-1 noun accuracy.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241077

    Comparative analysis of Sliding Window UCB and Discount Factor UCB in non-stationary environments: A Multi-Armed Bandit approach

    The Multi-Armed Bandit (MAB) problem is a well-studied topic within stationary environments, where the reward distributions remain consistent over time. Nevertheless, many real-world applications often fall within non-stationary contexts, where the rewards from each arm can evolve. In light of this, our research focuses on examining and contrasting the effectiveness of two leading algorithms tailored for these shifting environments: the Sliding Window Upper Confidence Bound (SW-UCB) and the Discount Factor UCB (DF-UCB). By harnessing both simulated and real-world datasets, our evaluation encompasses adaptability, computational efficiency, and the potential for regret minimization. Our findings reveal that the SW-UCB is adept at swiftly adjusting to sudden shifts, whereas the DF-UCB emerges as the more resource-efficient option amidst gradual transitions. Notably, when pitted against conventional UCB algorithms within non-stationary contexts, both contenders exhibit substantial advancements. Such insights bear significant relevance to fields like online advertising, healthcare, and finance, where the capacity to nimbly adapt to dynamic environments is paramount.
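
    A sketch of the discount-factor idea on a toy two-armed bandit whose reward probabilities flip halfway through; gamma, the exploration constant, and the horizon are illustrative assumptions.

    ```python
    # Sketch: Discount Factor UCB (DF-UCB) - old observations fade geometrically,
    # so the index can recover after an abrupt change in the reward distributions.
    import math
    import random

    random.seed(0)
    gamma, T, K = 0.98, 2000, 2
    counts, sums = [0.0] * K, [0.0] * K              # discounted pulls and rewards

    for t in range(T):
        p = [0.8, 0.2] if t < T // 2 else [0.2, 0.8] # abrupt change at T/2
        if min(counts) < 1e-9:
            arm = t % K                              # play each arm once first
        else:
            n = sum(counts)
            ucb = [sums[i] / counts[i] + math.sqrt(2 * math.log(n) / counts[i])
                   for i in range(K)]
            arm = max(range(K), key=lambda i: ucb[i])
        reward = 1.0 if random.random() < p[arm] else 0.0
        counts = [gamma * c for c in counts]         # discount every arm each round
        sums = [gamma * s for s in sums]
        counts[arm] += 1.0
        sums[arm] += reward

    print("prefers arm 1 after the change:", counts[1] > counts[0])
    ```

    The sliding-window variant (SW-UCB) would instead compute the same index over only the last tau observations, which is why it reacts faster to abrupt shifts at a higher memory cost.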

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241080

    Performance comparison and analysis of SVD and ALS in recommendation system

    This research predominantly focuses on the electronic segment of the Amazon dataset. In this setting, this study’s primary objective is to use this particular dataset to carry out a detailed comparative analysis of two matrix factorization-based collaborative filtering techniques, namely Singular Value Decomposition (SVD) and Alternating Least Squares (ALS). The findings stemming from this investigation reveal a notable contrast in the performance of these algorithms. Specifically, the SVD algorithm demonstrates significantly higher overall accuracy when compared to ALS. This observation suggests that in scenarios characterized by denser and smaller datasets, the SVD algorithm outperforms ALS by a considerable margin. The implications of these results underscore the significance of algorithm selection in recommender systems, emphasizing that the performance of collaborative filtering methods can vary markedly depending on the dataset’s characteristics. Additionally, this research highlights the potential limitations of ALS in scenarios similar to the one explored here, shedding light on the importance of tailoring algorithmic choices to the specific data environment. Overall, these findings contribute valuable insights to the field of recommendation systems and provide guidance for algorithm selection based on dataset properties.
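
    For concreteness, a minimal ALS sketch that alternates ridge-regularized least-squares solves for the user and item factors; the dense toy ratings matrix stands in for the Amazon electronics data.

    ```python
    # Sketch: ALS for R ~ U V^T, minimizing ||R - U V^T||^2 + lam(||U||^2 + ||V||^2).
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k, lam = 30, 20, 5, 0.1
    R = rng.integers(1, 6, (n_users, n_items)).astype(float)   # toy 1-5 ratings

    U = rng.standard_normal((n_users, k))
    V = rng.standard_normal((n_items, k))
    I_k = lam * np.eye(k)
    for _ in range(20):
        U = R @ V @ np.linalg.inv(V.T @ V + I_k)    # fix V, solve for U
        V = R.T @ U @ np.linalg.inv(U.T @ U + I_k)  # fix U, solve for V
    rmse = np.sqrt(np.mean((R - U @ V.T) ** 2))
    print(f"train RMSE after 20 sweeps: {rmse:.3f}")
    ```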

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241083

    A review of artificial intelligence in video games: From preset scripts to self-learning

In the 21st century, technologies such as artificial intelligence and big data have developed rapidly, and these ever-evolving technologies have greatly contributed to today's flourishing video game field. This paper focuses on the development of artificial intelligence applications in video games over the past two decades, from preset scripts to self-learning processes, adopting the research method of literature review. The paper concludes that the shift from pre-scripted to self-learning AI marks a shift in video games from experiences with clear rules and controlled processes to complex, dynamic, personalized experiences. This shift brings not only new opportunities but also new challenges. In the future, we can expect more research and practice to explore and take advantage of the possibilities of self-learning AI in video games.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241084

    An overview of the application of data mining in the campus

With the advent of campus informatization, it has been observed that big data can facilitate the integration of education and enhance students' learning efficiency. Big data technology has become increasingly prevalent in contemporary society, leading to a growing reliance on its applications and a rising need for educational platforms that can cater to this evolving demand. China is persistently expanding campus infrastructure with the aim of attaining modern education objectives and ushering in a new era of educational facilities. How big data technology is utilized, its current status, and the strategies for achieving its widespread adoption have emerged as crucial concerns in the present wave of information technology across educational institutions. At the same time, data mining technology enhances the learning environment for students, mitigates the constraints of campus intelligence, and offers substantial assistance for many educational endeavours on campus. Applying data mining technology to advance campus management has emerged as a pivotal focal point and a significant breakthrough. This research examines the concept of the convenience zone in data mining and explores its application policy through a comprehensive assessment of relevant literature. The study reveals a lack of widespread awareness and adoption of big data in various contexts where it is needed, and explores the potential utilization of big data at educational institutions, namely on campus.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241085

    Investigation the influence related to parameters configuration of Generative Adversarial Networks in face image generation

Given the excellent performance of Generative Adversarial Networks (GANs) for age regression on face images, it is particularly important to explore the effect of different parameters on model training. In this study, the origin and development of Artificial Intelligence (AI) are first discussed, from which the concept and principles of GANs are derived. This is followed by a brief introduction of the UTKFace dataset used in this research and the Conditional Adversarial Autoencoder (CAAE) framework based on the GAN technique. The division of labor and the roles of the encoder, the generator, and the two discriminators in the model are described. The various learning rate and batch size combinations attempted in this study are then illustrated, and the training results are shown as graphs and plots of the loss functions. The results highlight a situation where the model stops learning, similar to mode collapse in GANs, characterized by the inability of the discriminator to recognize it successfully. Ultimately, drawing from the acquired outcomes, it can be deduced that a larger batch size speeds up model training; it is advisable to raise the learning rate by an equivalent factor when increasing the batch size, thereby ensuring a consistent trajectory for model convergence.

  • Open Access | Article 2024-03-22 Doi: 10.54254/2755-2721/49/20241086

    Generative Adversarial Networks-based solution for improving medical data quality and insufficiency

As big data brings intelligent solutions and innovations to various fields, the goal of this research is to solve the problem of poor-quality and insufficient datasets in the medical field, so that under-resourced areas can also access high-quality, rich medical datasets. This study addresses the problem by utilizing two variants of the generative adversarial network: the Super-Resolution Generative Adversarial Network (SRGAN) and the Deep Convolutional Generative Adversarial Network (DCGAN). OpenCV is employed to introduce fuzziness to the Brain Tumor MRI Dataset, producing a blurred dataset. Subsequently, the research uses both the unaltered and blurred datasets to train the SRGAN model, which is then applied to enhance the low-quality dataset through inpainting. The original dataset, the low-quality dataset, and the improved dataset are then each used independently to train the DCGAN model. To compare the produced image datasets with the real dataset, the Fréchet Inception Distance (FID) score is computed separately for each. The study found that training DCGAN with the SRGAN-repaired medical dataset yields medical images that are visibly clearer and show a reduced FID score. Therefore, using SRGAN and DCGAN can address the current problem of low-quality and scarce datasets in the medical field, increasing the potential of big data in the artificial intelligence field of medicine.
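
    A sketch of the OpenCV degradation step the abstract describes; the directory paths, file extension, and kernel size are hypothetical assumptions.

    ```python
    # Sketch: blur MRI slices with OpenCV to build the low-quality dataset that
    # SRGAN is later trained to restore.
    import glob
    import os
    import cv2

    src_dir, dst_dir = "brain_mri/original", "brain_mri/blurred"  # hypothetical paths
    os.makedirs(dst_dir, exist_ok=True)
    for path in glob.glob(os.path.join(src_dir, "*.jpg")):        # assumed extension
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        blurred = cv2.GaussianBlur(img, (9, 9), 3)                # introduce fuzziness
        cv2.imwrite(os.path.join(dst_dir, os.path.basename(path)), blurred)
    ```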

Copyright © 2023 EWA Publishing, unless otherwise stated.