Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 6th International Conference on Computing and Data Science

    Conference Date



    ISBN

    978-1-83558-425-5 (Print)

    978-1-83558-426-2 (Online)

    Published Date



    Editor(s)

    Alan Wang, University of Auckland

    Roman Bauer, University of Surrey


  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241334

    Deep learning DGA malicious domain name detection based on multi-stage feature fusion

    In recent years, cybersecurity incidents have occurred frequently, with botnets extensively utilizing Domain Generation Algorithms (DGAs) to evade detection. To address the insufficient accuracy of existing DGA malicious domain detection models, this paper proposes a deep learning detection model based on multi-stage feature fusion. The model extracts local features and positional information from domain name sequences by fusing a Multilayer Convolutional Neural Network (MCNN) with a Transformer, captures long-distance contextual semantic features of the sequences with a Bi-directional Long Short-Term Memory network (BiLSTM), and finally fuses these features for malicious domain classification. Experimental results show that the model maintains an average accuracy of 93.26% and an average F1-score of 93.32% across 33 DGA families, demonstrating better overall detection performance than other deep learning detection algorithms.
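    A character-level encoding front end is the usual first stage of such sequence models; a minimal sketch is shown below (the vocabulary and maximum length are illustrative assumptions, not the paper's settings):

```python
# Character-level integer encoding of domain names -- the input
# representation a CNN/Transformer/BiLSTM fusion model would consume.
# Vocabulary and MAX_LEN are illustrative assumptions.
VOCAB = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz0123456789-.")}
MAX_LEN = 32  # longer domains are truncated, shorter ones zero-padded

def encode_domain(domain: str) -> list[int]:
    """Map a domain name to a fixed-length sequence of integer token ids."""
    ids = [VOCAB.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    return ids + [0] * (MAX_LEN - len(ids))

# Example: a benign-looking and an algorithmically generated domain
benign = encode_domain("example.com")
dga = encode_domain("xjw3qpzkfy7l.net")
```

    The resulting fixed-length id sequences would feed a shared embedding layer ahead of the MCNN, Transformer, and BiLSTM branches.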

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241359

    Application of machine learning optimization in cloud computing resource scheduling and management

    In recent years, cloud computing has been widely adopted. Cloud computing centralizes computing resources: users access these centralized resources to perform their computations, and the cloud computing center returns the processed results to them. It serves not only individual users but also enterprises; by renting cloud servers, users avoid purchasing large numbers of computers, saving computing costs. According to a report by China Economic News Network, the scale of cloud computing in China has reached 209.1 billion yuan. Rational allocation of resources plays a crucial role in cloud computing: the cloud computing center has limited cloud resources, users arrive in sequence, and each user requests a certain quantity of cloud resources at a specific time.
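    The sequential-arrival allocation setting described here can be sketched as a greedy admission check (the capacity value, time horizon, and request format are illustrative assumptions):

```python
# Minimal sketch of sequential cloud-resource admission: users arrive one
# by one and request `amount` units over [start, end); a request is
# accepted only if remaining capacity covers it for the whole interval.
def admit(requests, capacity, horizon):
    usage = [0] * horizon          # units in use at each time step
    accepted = []
    for start, end, amount in requests:
        if all(usage[t] + amount <= capacity for t in range(start, end)):
            for t in range(start, end):
                usage[t] += amount
            accepted.append((start, end, amount))
    return accepted

# Three users arrive in sequence; the third would exceed capacity at t=3.
granted = admit([(0, 4, 6), (2, 6, 3), (3, 5, 2)], capacity=10, horizon=8)
```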

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241342

    A road semantic segmentation system for remote sensing images based on deep learning

    With the rapid development of deep learning in computer science, many academic fields have experienced its power and efficiency and begun to integrate it into their own research. In remote sensing specifically, the challenge of extracting roads from raw images can be effectively addressed with deep learning. High-precision road extraction not only helps scientists keep road maps up to date but also speeds up the digitization of roads in big cities. So far, however, deep learning models have not matched the accuracy of manual road extraction, because they cannot extract roads reliably in complex settings such as villages. This study trained a new road extraction model based on the UNet architecture using only datasets from large cities, achieving high extraction precision for urban roads. Although this can lead to over-fitting, the resulting accuracy ensures that the model performs well in large-city scenarios, helping researchers update urban road maps more conveniently and quickly.
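    Extraction precision in such studies is usually scored per pixel; a minimal intersection-over-union metric is sketched below (the abstract does not name its exact metric, so this is an assumed, standard choice):

```python
import numpy as np

# Intersection-over-Union between a predicted and a ground-truth road
# mask; masks are binary arrays with 1 marking road pixels.
def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, truth)  # intersection = 2 pixels, union = 4 pixels
```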

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241345

    DOA estimation technology based on array signal processing nested array

    Research on non-uniform arrays has long been a focus for scholars at home and abroad. Some work concentrates on existing non-uniform array geometries, while other work optimizes element positions or extends array structures; there are also studies of one-dimensional and two-dimensional DOA estimation algorithms based on array spatial shapes, despite remaining issues. As long as spatial-domain target localization is in demand, the development and refinement of non-uniform arrays will remain a hot research direction. Nested arrays are a special type of non-uniform array whose geometry markedly increases the degrees of freedom and improves the estimation of source directions when the number of sources is uncertain. Compared with other algorithms, the one-dimensional DOA estimation algorithm based on spatial smoothing reduces complexity and improves estimation accuracy on nested arrays, and it can handle source estimation under uncertain conditions. The DFT algorithm it employs not only significantly improves angular estimation performance but also reduces computational complexity, using the full degrees of freedom to minimize aperture loss. Furthermore, the DFT-MUSIC method greatly reduces computational cost while performing very close to the spatial-smoothing MUSIC algorithm. The sparse arrays involved, including minimum-redundancy arrays, coprime arrays, and nested arrays, form a new class of array: compared with traditional uniform linear arrays, they increase the degrees of freedom, resolve source angles when the number of sources is uncertain, and enhance angular estimation performance.
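    The degrees-of-freedom gain of a two-level nested array comes from its difference coarray; a small numeric sketch (N1 = N2 = 3 is an illustrative choice):

```python
# Two-level nested array: N1 inner sensors at spacing d, N2 outer sensors
# at spacing (N1+1)d. The set of pairwise position differences (the
# difference coarray) forms a filled virtual ULA much larger than the
# physical array, which is the source of the extra degrees of freedom.
N1, N2 = 3, 3
inner = [n for n in range(1, N1 + 1)]              # positions in units of d
outer = [m * (N1 + 1) for m in range(1, N2 + 1)]
sensors = inner + outer                            # 6 physical sensors

coarray = sorted({a - b for a in sensors for b in sensors})
```

    Here six physical sensors yield a filled virtual ULA of 23 elements, which is what lets spatial-smoothing MUSIC on the coarray resolve more sources than physical sensors.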

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241363

    Multi-dimensional analysis of the impact of new energy vehicles on the urban ecological environment and prediction of future

    This study examines the development indicators of China's new energy vehicle (NEV) industry using clustering and multiple regression. The indicators are divided into external factors, such as the completeness of charging facilities, market demand, and policies and regulations, and internal factors, mainly brand types and power costs. The study compares forecasting models fitted to industry data, including an exponential smoothing model, a grey forecasting model, and a Brownian-motion forecasting model. The forecasts show that the industry in China will maintain a positive development trend over the next ten years, indicating a very bright outlook for electric vehicles. A population competition model is used to capture the competition between new energy and traditional fuel vehicles; it indicates that new energy vehicles are replacing traditional fuel vehicles and pushing the automotive industry toward an environmentally friendly and efficient transformation. The study also collects the key measures, and their timing, that other countries have taken targeting this industry in China. Analyzing industry data from before and after these events shows that external factors, such as other countries' policies, may inhibit the industry's growth: if other countries act to thwart China's NEV industry, growth may pause temporarily, or the industry may even suffer a short-term recession.
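    The population competition model referred to is conventionally the Lotka-Volterra competition system; a sketch with a simple Euler integrator follows (all rates, capacities, and initial shares are illustrative assumptions, not the study's fitted values):

```python
# Lotka-Volterra competition between NEVs (x) and fuel vehicles (y),
# integrated with forward Euler. With these illustrative coefficients,
# NEVs suffer less from competition (b < 1 < a), so they eventually
# displace fuel vehicles.
def compete(x, y, steps=40000, dt=0.01,
            r1=0.5, r2=0.3, K1=1.0, K2=1.0, a=1.2, b=0.7):
    for _ in range(steps):
        dx = r1 * x * (1 - (x + b * y) / K1)
        dy = r2 * y * (1 - (y + a * x) / K2)
        x, y = x + dt * dx, y + dt * dy
    return x, y

nev, fuel = compete(x=0.05, y=0.8)  # NEVs start from a small market share
```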

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241374

    Integration and performance analysis of artificial intelligence and computer vision based on deep learning algorithms

    This paper focuses on the analysis of the application effectiveness of the integration of deep learning and computer vision technologies. Deep learning achieves a historic breakthrough by constructing hierarchical neural networks, enabling end-to-end feature learning and semantic understanding of images. The successful experiences in the field of computer vision provide strong support for training deep learning algorithms. The tight integration of these two fields has given rise to a new generation of advanced computer vision systems, significantly surpassing traditional methods in tasks such as machine vision image classification and object detection. In this paper, typical image classification cases are combined to analyze the superior performance of deep neural network models while also pointing out their limitations in generalization and interpretability, proposing directions for future improvements. Overall, the efficient integration and development trend of deep learning with massive visual data will continue to drive technological breakthroughs and application expansion in the field of computer vision, making it possible to build truly intelligent machine vision systems. This deepening fusion paradigm will powerfully promote unprecedented tasks and functions in computer vision, providing stronger development momentum for related disciplines and industries.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241349

    Precise positioning and prediction system for autonomous driving based on generative artificial intelligence

    Self-driving systems collect vast amounts of data through a variety of sensors, including cameras, lidar, and millimeter-wave radar. This data must be processed in real time to identify roads, vehicles, pedestrians, and other obstacles, and to make decisions accordingly. This paper therefore discusses the importance of accurate positioning and prediction systems in autonomous driving technology and analyzes the performance of various positioning technologies in autonomous driving applications. In addition, it explores the application potential of AI in autonomous driving and the prospect of combining advanced positioning and prediction systems with generative AI. Overall, this study highlights the importance of algorithmic performance improvements and artificial intelligence in the development of autonomous driving technology, and provides new ideas and directions for future innovation in intelligent transportation systems.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241375

    Application of graph modeling and contrast learning in recommender system

    With the wide application of personalized recommender systems across many fields, improving their accuracy and degree of personalization has become a research hotspot. This paper proposes a method combining graph modeling and contrastive learning to improve recommendation performance by mining complex user-item interactions and user preferences. We first construct the user-item interaction graph and extract features of the graph structure with a graph neural network (GNN). In particular, a graph convolutional network (GCN) is used to update the node representations, and contrastive learning is introduced to optimize the feature representations, improving the accuracy and personalization of recommendations. Experimental results show that the proposed method outperforms traditional methods in precision, recall, and F1 score. By analyzing the mechanism of combining graph modeling and contrastive learning, the paper further expounds the theoretical basis and practical application of improving recommender performance, and points out the limitations of existing methods and directions for future research.
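    The GCN update described can be sketched as one symmetrically normalized propagation step over a toy user-item graph (the graph, feature sizes, and random initialization are illustrative):

```python
import numpy as np

# One GCN layer: features are mixed over neighbours via the symmetrically
# normalized adjacency D^{-1/2}(A+I)D^{-1/2}, followed by ReLU.
def gcn_layer(A, X, W):
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0)

# 2 users + 2 items; edges are observed user-item interactions.
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))     # initial node embeddings
W = rng.normal(size=(8, 4))     # layer weights
H = gcn_layer(A, X, W)          # updated node representations
```

    In the contrastive stage, two augmented views of `H` would be pulled together for the same node and pushed apart for different nodes; that objective is omitted here for brevity.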

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241352

    Comparison of deep learning models based on Chest X-ray image classification

    Pneumonia is a common respiratory disease characterized by inflammation in the lungs, emphasizing the importance of accurate diagnosis and timely treatment. Despite some progress in medical image segmentation, overfitting and low efficiency have been observed in practical applications. This paper aims to leverage image data augmentation methods to mitigate overfitting and achieve lightweight and highly accurate automatic detection of lung infections in X-ray images. We trained three models, namely VGG16, MobileNetV2, and InceptionV3, using both augmented and unaugmented image datasets. Comparative results demonstrate that the augmented VGG16 model (VGG16-Augmentation) achieves an average accuracy of 96.8%. While the accuracy of MobileNetV2-Augmentation is slightly lower than that of VGG16-Augmentation, it still achieves an average prediction accuracy of 94.2%, and its parameter count is only one-ninth that of VGG16-Augmentation. This is particularly beneficial for rapid screening of pneumonia patients and more efficient real-time detection scenarios. Through this study, we showcase the potential application of image data augmentation methods in pneumonia detection and provide performance comparisons among different models. These findings offer valuable insights for the rapid diagnosis and screening of pneumonia patients and provide useful guidance for future research and the implementation of efficient real-time monitoring of lung conditions in practical healthcare settings.
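    The augmentation step works by deriving label-preserving variants of each training image; a minimal sketch (the specific transforms are assumptions, not necessarily those used in the paper):

```python
import numpy as np

# Simple image augmentation: each transform yields a new training sample
# carrying the original label, multiplying the effective dataset size.
def augment(image: np.ndarray):
    yield image
    yield np.fliplr(image)                  # horizontal flip
    yield np.roll(image, shift=2, axis=0)   # small vertical translation
    yield np.rot90(image)                   # 90-degree rotation

image = np.arange(64, dtype=float).reshape(8, 8)
samples = list(augment(image))   # 4 samples from 1 original image
```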

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241346

    Bridging the gap in online hate speech detection: A comparative analysis of BERT and traditional models for homophobic content identification on X/Twitter

    Our study addresses a significant gap in online hate speech detection research by focusing on homophobia, an area often neglected in sentiment analysis research. Utilising advanced sentiment analysis models, particularly BERT, and traditional machine learning methods, we developed a nuanced approach to identify homophobic content on X/Twitter. This research is pivotal due to the persistent underrepresentation of homophobia in detection models. Our findings reveal that while BERT outperforms traditional methods, the choice of validation technique can impact model performance. This underscores the importance of contextual understanding in detecting nuanced hate speech. By releasing the largest open-source labelled English dataset for homophobia detection known to us, an analysis of various models' performance and our strongest BERT-based model, we aim to enhance online safety and inclusivity. Future work will extend to broader LGBTQIA+ hate speech detection, addressing the challenges of sourcing diverse datasets. Through this endeavour, we contribute to the larger effort against online hate, advocating for a more inclusive digital landscape. Our study not only offers insights into the effective detection of homophobic content by improving on previous research results, but it also lays groundwork for future advancements in hate speech analysis.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241347

    Detection and classification of wilting status in leaf images based on VGG16 with EfficientNet V3 algorithm

    The aim of this paper is to explore the importance of leaf wilting status detection and classification in agriculture to meet the demand for monitoring and diagnosing plant growth conditions. By comparing the performance of the traditional VGG16 image classification algorithm and the popular EfficientNet V3 algorithm in leaf image wilting status detection and classification, it is found that EfficientNet V3 has faster convergence speed and higher accuracy. As the model training process proceeds, both algorithms show a trend of gradual convergence of Loss and Accuracy and increasing accuracy. The best training results show that VGG16 reaches a minimum loss of 0.288 and a maximum accuracy of 96% at the 19th epoch, while EfficientNet V3 reaches a minimum loss of 0.331 and a maximum accuracy of 97.5% at the 20th epoch. These findings reveal that EfficientNet V3 has a better performance in leaf wilting status detection, which provides a more accurate and efficient means of plant health monitoring for agricultural production and is of great research significance.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241358

    Driving intelligent IoT monitoring and control through cloud computing and machine learning

    At present, cloud computing and the Internet of Things are closely integrated. IoT devices gather data through sensors and transmit it to the cloud for storage, processing, and analysis. This synergy enables efficient data management and in-depth analysis, facilitating real-time monitoring and predictive maintenance. This article explores leveraging cloud computing and machine learning for intelligent IoT monitoring and control. Edge computing, a distributed architecture, decentralizes data processing from the cloud to reduce latency and improve efficiency. This combination enhances security and drives the development of intelligent systems.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241361

    Practical applications of advanced cloud services and generative AI systems in medical image analysis

    The medical field is one of the most important application areas of artificial intelligence technology. With the explosive growth and diversification of medical data, and the continuous rise in medical needs and challenges, artificial intelligence is playing an increasingly important role in healthcare. AI technologies represented by computer vision, natural language processing, and machine learning have penetrated widely into scenarios such as medical imaging, health management, medical information systems, and drug research and development, and have become an important driving force for improving the level and quality of medical services. The article explores the transformative potential of generative AI in medical imaging, emphasizing its ability to generate synthetic data, enhance images, aid in anomaly detection, and facilitate image-to-image translation. Despite challenges like model complexity, the applications of generative models in healthcare, including Med-PaLM 2 technology, show promising results. By addressing limitations in dataset size and diversity, these models contribute to more accurate diagnoses and improved patient outcomes. However, ethical considerations and collaboration among stakeholders are essential for responsible implementation. Through experiments leveraging GANs to augment brain tumor MRI datasets, the study demonstrates how generative AI can enhance image quality and diversity, ultimately advancing medical diagnostics and patient care.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241362

    RNA secondary structure prediction using transformer-based deep learning models

    The Human Genome Project has led to an exponential increase in data on the sequence, structure, and function of biomolecules. Bioinformatics is an interdisciplinary field that primarily uses computational methods to analyze large amounts of biological macromolecule data, aiming to discover hidden biological patterns and related information; analyzing additional relevant information can further illuminate biological mechanisms. This paper discusses the fundamental concepts of RNA and RNA secondary structure and its prediction. Subsequently, the application of machine learning to predicting the structure of biological macromolecules is explored. The paper reviews the relevant algorithms and their computational complexity and presents an RNA tertiary structure prediction algorithm based on ResNet: to address the unsuitability of current scoring functions for long RNA, a ResNet-based scoring model is proposed and a structure prediction algorithm is designed. The paper concludes with some open and interesting challenges in the field of RNA tertiary structure prediction.
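    For contrast with the ResNet-based scoring approach the paper proposes, the classic baseline for secondary-structure prediction is the Nussinov dynamic program, which simply maximizes the number of complementary base pairs:

```python
# Nussinov dynamic program: dp[i][j] is the maximum number of
# complementary base pairs in seq[i..j], with a minimum hairpin-loop
# length enforced between paired positions. A textbook baseline, not
# the paper's ResNet model.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq: str, min_loop: int = 3) -> int:
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i left unpaired
            for k in range(i + min_loop + 1, j + 1):  # i pairs with k
                if (seq[i], seq[k]) in PAIRS:
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1]

pairs = nussinov("GGGAAAUCC")  # a hairpin: 3 stem pairs around an AAA loop
```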

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241350

    Predictive optimization of DDoS attack mitigation in distributed systems using machine learning

    In recent years, cloud computing has been widely adopted. This paper proposes an innovative approach to complex problems in cloud computing resource scheduling and management using machine learning optimization techniques. Through in-depth study of challenges such as low resource utilization and unbalanced load in the cloud environment, it proposes a comprehensive solution, including optimization methods such as deep learning and genetic algorithms, to improve system performance and efficiency, bringing new breakthroughs and progress to the field of cloud computing resource management. Rational allocation of resources plays a crucial role in cloud computing: the cloud computing center has limited cloud resources, users arrive in sequence, and each user requests a certain quantity of cloud resources at a specific time.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241353

    Maximizing user experience with LLMOps-driven personalized recommendation systems

    The integration of LLMOps into personalized recommendation systems marks a significant advancement in managing LLM-driven applications. This innovation presents both opportunities and challenges for enterprises, requiring specialized teams to navigate the complexity of engineering technology while prioritizing data security and model interpretability. By leveraging LLMOps, enterprises can enhance the efficiency and reliability of large-scale machine learning models, driving personalized recommendations aligned with user preferences. Despite ethical considerations, LLMOps is poised for widespread adoption, promising more efficient and secure machine learning services that elevate user experience and shape the future of personalized recommendation systems.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241367

    Application and development direction of deep learning in COVID-19 identification based on Computed Tomography images

    Caused by the novel coronavirus SARS-CoV-2, COVID-19 is highly contagious via respiratory droplets from sneezing, coughing, or talking, and it can lead to severe respiratory issues, organ failure, and death. Early detection, treatment, and isolation of those at risk help slow its spread, but the disease has challenged traditional diagnostic methods such as RT-PCR, whose sensitivity is limited. CT imaging, aided by deep learning models, offers advantages for the early detection of lung abnormalities. This paper reviews the use of deep learning in analyzing CT images for COVID-19 diagnosis, highlighting advances such as image segmentation with U-Net and feature pyramid networks (FPN), and traces the evolution of deep learning models in this domain, from initial applications focused on image classification and recognition to later segmentation-oriented techniques. Novel approaches such as multi-task learning and quantitative analysis show promise for improving accuracy. Future research should focus on enhancing training datasets, refining model architectures, and integrating these methods into clinical decision-making for COVID-19 management.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241370

    Integration of computer networks and artificial neural networks for an AI-based network operator

    This paper proposes an integrated approach combining computer networks and artificial neural networks to construct an intelligent network operator, functioning as an AI model. State information from computer networks is transformed into embedded vectors, enabling the operator to efficiently recognize different pieces of information and accurately output appropriate operations for the computer network at each step. The operator has undergone comprehensive testing, achieving a 100% accuracy rate, thus eliminating operational risks. Additionally, a simple computer network simulator is created and encapsulated into training and testing environment components, enabling automation of the data collection, training, and testing processes. This abstract outlines the core contributions of the paper while highlighting the innovative methodology employed in the development and validation of the AI-based network operator.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241356

    Intelligent medical detection and diagnosis assisted by deep learning

    The integration of artificial intelligence (AI) in healthcare has led to the development of intelligent auxiliary diagnosis systems, enhancing diagnostic capabilities across various medical domains. These AI-assisted systems leverage deep learning algorithms to aid healthcare professionals in disease screening, localization of focal areas, and treatment plan selection. With policies emphasizing innovation in medical AI technology, particularly in China, AI-assisted diagnosis systems have emerged as valuable tools in improving diagnostic accuracy and efficiency. These systems, categorized into image-assisted and text-assisted modes, utilize medical imaging data and clinical diagnosis records to provide diagnostic support. In the context of lung cancer diagnosis and treatment, AI-assisted integrated solutions show promise in early detection and treatment decision support, particularly in the detection of pulmonary nodules. Overall, the integration of AI in healthcare holds significant potential for improving diagnostic accuracy, efficiency, and patient outcomes, contributing to advancements in medical practice.

  • Open Access | Article 2024-05-15 Doi: 10.54254/2755-2721/64/20241372

    The intelligent prediction and assessment of financial information risk in the cloud computing model

    Cloud computing is a kind of distributed computing in which a huge data computation and processing task is broken down over the network "cloud" into countless small programs, which a system of multiple servers then processes and analyzes before returning the results to the user. This report explores the intersection of cloud computing and financial information processing, identifying the risks and challenges financial institutions face in adopting cloud technology. It discusses the need for intelligent solutions that enhance data processing efficiency and accuracy while addressing security and privacy concerns. Drawing on regulatory frameworks, the report proposes policy recommendations to mitigate the concentration risks associated with cloud computing in the financial industry. By combining intelligent forecasting and evaluation technologies with cloud computing models, the study aims to provide effective solutions for financial data processing and management, facilitating the industry's transition toward digital transformation.

Copyright © 2023 EWA Publishing, unless otherwise stated.