Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 6th International Conference on Computing and Data Science

    Conference Date

    ISBN

    978-1-83558-393-7 (Print)

    978-1-83558-394-4 (Online)

    Published Date



    Editor(s)

    Alan Wang, University of Auckland

    Roman Bauer, University of Surrey


  • Open Access | Article 2024-04-17 Doi: 10.54254/2755-2721/57/20241348

    Dynamic resource allocation for virtual machine migration optimization using machine learning

    This article examines the application of machine learning and deep reinforcement learning techniques to cloud resource management and virtual machine migration optimization, highlighting the role of these technologies in handling the dynamic changes and complexity of cloud computing environments. Through environment modeling, policy learning, and adaptive enhancement, machine learning methods, especially deep reinforcement learning, provide effective solutions for dynamic resource allocation and virtual machine migration. These technologies help cloud service providers improve resource utilization, reduce energy consumption, and enhance service reliability and performance. Effective strategies include simplifying the state and action spaces, reward shaping, model lightweighting and acceleration, and speeding up learning through transfer learning and meta-learning. As machine learning and deep reinforcement learning continue to advance alongside cloud computing technology, their application to cloud resource management and virtual machine migration optimization is expected to become broader and deeper. Researchers will continue to explore more efficient algorithms and models to further improve the accuracy and efficiency of decision making. Moreover, as edge computing, the Internet of Things, and other technologies converge with the cloud, resource management will face new challenges and opportunities, and the scope and depth of these techniques will expand accordingly, opening new possibilities for building more intelligent, efficient, and reliable cloud computing services.
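
    The reinforcement-learning machinery this abstract surveys can be made concrete with a minimal tabular Q-learning sketch for a toy VM-placement problem. Everything here is illustrative: the state encoding (coarse host-load levels), the action set (candidate hosts), and the reward are assumptions for demonstration, not the article's actual method.

```python
import random

# Toy tabular Q-learning for VM placement. States are coarse host-load levels,
# actions are candidate hosts; the encoding and reward are purely illustrative.
N_LOAD_LEVELS, N_HOSTS = 4, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_LOAD_LEVELS) for a in range(N_HOSTS)}

def choose_action(state, rng):
    if rng.random() < EPS:                                   # explore
        return rng.randrange(N_HOSTS)
    return max(range(N_HOSTS), key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    # The standard Q-learning temporal-difference update.
    best_next = max(Q[(next_state, a)] for a in range(N_HOSTS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

rng = random.Random(0)
for _ in range(2000):
    s = rng.randrange(N_LOAD_LEVELS)
    a = choose_action(s, rng)
    r = 1.0 if a == 0 else -0.1      # pretend host 0 is the lightly loaded one
    update(s, a, r, rng.randrange(N_LOAD_LEVELS))

best = max(range(N_HOSTS), key=lambda a: Q[(0, a)])   # greedy policy at state 0
```

    A real controller would derive the state from host CPU and memory telemetry and shape the reward from energy and SLA terms, along the lines the abstract describes.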

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241306

    GlassOnly: Transparent object dataset for object detection

    Although datasets like ImageNet cover a wide variety of object classes, they contain few samples of transparent objects such as glass walls, which are widely installed in shopping malls and houses. Neglecting transparent objects in object detection can endanger humans, as machines that do not register glass as an obstacle will ignore it during path planning. GlassOnly therefore collects samples from malls and apartments to build a dataset dedicated to glass walls. The samples simulate a robot walking through human living environments from the machine's own perspective, providing data for studying the detection of transparent objects.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241307

    Enhancing efficiency and user-centricity in architectural remodeling: A comprehensive system design for structural renovation

    The home renovation industry has witnessed remarkable growth, driven by shifts in lifestyle that necessitate adjustments to living spaces. This paper addresses critical gaps in architectural remodeling, with a particular focus on improving efficiency, referenceability, and user-centricity in structural remodeling. It introduces a system design tailored to structural remodeling within house renovation, catering to both comprehensive and partial projects, that facilitates the creation of structurally viable renovation options and optimizes them to align precisely with user requirements. The proposed system is distinguished by its accessibility and its consideration of architectural factors. While offering substantial benefits, the system has limitations, such as the exclusion of interior furnishing styles from output solutions. In conclusion, this work contributes to the improvement of remodeling projects and offers a promising approach, particularly in the early stages of these endeavors.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241308

    Dual attention-enhanced SSD: A novel deep learning model for object detection

    Object detection is a fundamental task in computer vision with significant implications across various applications, including autonomous driving, surveillance, and image understanding. The accurate and efficient detection of objects within images is crucial for enabling machines to interpret visual information and make informed decisions. In this paper, we present an enhanced version of the Single Shot MultiBox Detector (SSD) for object detection, leveraging the concept of dual attention mechanisms. Our proposed approach, named SSD-Dual Attention, integrates dual attention layers into the SSD framework. These dual attention layers are strategically positioned between feature maps and prediction convolutions, enhancing the model's ability to capture informative feature representations across a wide range of object scales and backgrounds. Experimental results on the PASCAL VOC 2007 and 2012 datasets validate the effectiveness of our approach. Notably, SSD-Dual Attention achieves an impressive mean Average Precision (mAP) of 78.1%, surpassing the performance of SSD models enhanced with attention mechanisms such as SSD-ECA, SSD-CBAM, SSD-Non-local attention and SSD-SE attention, as well as the original SSD. These results underscore the enhanced accuracy and precision of our object detection system, marking a substantial advancement in the field of computer vision. Code is available at
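
    The dual attention idea this abstract describes, re-weighting feature maps along the channel and spatial dimensions before the prediction convolutions, can be illustrated with a minimal NumPy sketch. This is generic CBAM-style gating with learnable parameters omitted, not the authors' exact module.

```python
import numpy as np

def channel_attention(fmap):
    # fmap: (C, H, W). Squeeze spatially, gate each channel with a sigmoid.
    weights = 1.0 / (1.0 + np.exp(-fmap.mean(axis=(1, 2))))   # shape (C,)
    return fmap * weights[:, None, None]

def spatial_attention(fmap):
    # Squeeze channels, gate each spatial location with a sigmoid.
    weights = 1.0 / (1.0 + np.exp(-fmap.mean(axis=0)))        # shape (H, W)
    return fmap * weights[None, :, :]

def dual_attention(fmap):
    # Channel attention followed by spatial attention, CBAM-style.
    return spatial_attention(channel_attention(fmap))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # stand-in for one SSD feature map
refined = dual_attention(feat)          # same shape: attention re-weights only
```

    Because attention only re-weights activations, the refined map keeps the feature map's shape, which is what lets such layers sit between SSD's feature maps and its prediction convolutions without altering the detection head.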

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241309

    Action-Aware Vision Language Navigation (AAVLN): AI vision system based on cross-modal transformer for understanding and navigating dynamic environments

    Visually impaired individuals face great challenges in independently navigating dynamic environments because they cannot fully comprehend the environment and the actions of surrounding people. Conventional navigation approaches like Simultaneous Localization And Mapping (SLAM) rely on complete scanned maps and navigate static, fixed environments. With Vision Language Navigation (VLN), agents can understand semantic information and extend navigation to similar environments. However, neither can accurately navigate dynamic environments containing human actions. To address this challenge, we propose a novel cross-modal transformer-based Action-Aware VLN system (AAVLN). AAVLN's cross-modal transformer structure allows the Agent Algorithm to understand natural language instructions and semantic information for navigating dynamic environments and recognizing human actions. For training, we use Reinforcement Learning in our action-based Environment Simulator, which we created by combining an existing simulator with our novel 3D human action generator. Our experimental results demonstrate the effectiveness of our approach, outperforming current methods on various metrics across challenging benchmarks. Our ablation studies also show that the Vision Transformer-based human action recognition module and the cross-modal encoding increase dynamic navigation accuracy. We are currently constructing 3D models of real-world environments, including hospitals and schools, to further train AAVLN. Our project will be combined with ChatGPT to improve natural language interactions. AAVLN will have numerous applications in robotics, AR, and other computer vision fields.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241310

    Keywords-based conditional image transformation

    In recent years, Generative Adversarial Networks (GANs) and their variants, such as pix2pix, have occupied a significant position in the field of image generation. Despite the impressive performance of the pix2pix model in image-to-image transformation tasks, its reliance on a large amount of paired training data and computational resources has posed a crucial constraint to its broader application. To address these issues, this paper introduces a novel algorithm, Keywords-Based Conditional Image Transformation (KB-CIT). KB-CIT dynamically extracts keywords from the input grayscale images to acquire and generate training data, thus avoiding the need for a large amount of paired data and significantly improving the efficiency of image transformation. Experimental results demonstrate that KB-CIT performs remarkably well in image colorization tasks and can generate high-quality colored images even with limited training data. This algorithm not only simplifies the data collection process but also exhibits significant advantages in terms of computational resource requirements, data utilization efficiency, and personalized real-time training of the model, thereby providing new possibilities for the widespread application of the pix2pix model.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241312

    Integrates Differential Gene Expression analysis and deep learning for accurate and robust prostate cancer diagnosis

    Diagnosing complex diseases and extending human lifespan remain pressing challenges. Traditional methods relying on visual characteristics, such as ultrasound and angiography, often struggle to detect cancer in its early stages, and the intricate, nonlinear nature of disease limits their diagnostic accuracy. Detecting cancer from gene expression offers a more robust and effective approach, as it directly assesses the genetic activity within cells. In this study, we develop a prostate cancer feature selection method based on differentially expressed genes (DEGs). Utilizing datasets from the Gene Expression Omnibus (GEO) and The Cancer Genome Atlas (TCGA), we curated data for both model training and testing, applying stringent filtering criteria based on p-value and fold change. Our study identifies a panel of 220 genes with substantial potential for prostate cancer detection. We then construct an ANN model for diagnosing the disease, whose accuracy of 0.78±0.01 exceeds that of other models such as Ridge Classifiers, Logistic Regression, Naive Bayes, and Decision Trees, whose average accuracy is 0.73±0.01. Notably, these genes also perform exceptionally well across various other classifiers, indicating robustness and effectiveness independent of any specific model. Credibility is validated by comparison against random genes, and adaptability by using pancreatic cancer data from GEO; a Gene Ontology analysis further verifies the feasibility of the method. This panel establishes a solid foundation for advancing clinical diagnostics of prostate cancer, and the framework holds potential to significantly transform prostate cancer screening by offering strong resilience and precision across multiple classification methods.
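
    The p-value and fold-change filtering mentioned in the abstract is a standard DEG selection step. The following is a minimal sketch of that filter; the thresholds, the pseudocount, and the toy data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def select_degs(expr_case, expr_ctrl, p_values, p_thresh=0.05, fc_thresh=1.0):
    """Keep genes that pass both a p-value cut-off and an absolute
    log2 fold-change cut-off. Thresholds here are illustrative."""
    log2_fc = (np.log2(expr_case.mean(axis=0) + 1e-9)
               - np.log2(expr_ctrl.mean(axis=0) + 1e-9))
    mask = (p_values < p_thresh) & (np.abs(log2_fc) > fc_thresh)
    return np.where(mask)[0]

# Toy expression matrices: 5 samples x 4 genes; gene 0 is up-regulated in cases.
case = np.array([[8.0, 1.0, 1.0, 2.0]] * 5)
ctrl = np.array([[1.0, 1.0, 1.0, 2.0]] * 5)
pvals = np.array([0.01, 0.01, 0.5, 0.01])
degs = select_degs(case, ctrl, pvals)   # only gene 0 passes both cut-offs
```

    Requiring both criteria is what keeps the selected panel small and biologically interpretable: a gene must be both statistically significant and substantially changed in expression.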

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241313

    Unraveling the characteristics of Parkinson's Disease through neuroimaging: Insights and future directions

    Parkinson's disease (PD) is a neurodegenerative disease with a high degree of patient heterogeneity, and as of 2016 there are approximately 6.1 million PD patients worldwide. PD has a high proportion of patients with intermediate to advanced disease and a high rate of disability, and clinical diagnosis and treatment are difficult due to the lack of neuromarkers to identify disease states and the inability to quantify the effects of treatment in PD. In recent years, many researchers have explored specific changes in brain activity in PD based on electrophysiological and neuroimaging data. Electroencephalography, with its high temporal resolution and rich frequency domain information, and functional magnetic resonance imaging, with its high spatial resolution, have become the main tools to characterize the state of brain activity in PD in recent years. This paper analyses some of the available data on the characteristics of PD through two neuroimaging techniques commonly used in the disease. It concludes with the prospect of being able to establish uniform criteria for determining PD.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241314

    Enhancing automotive interior automation through face analysis techniques

    In contemporary society, the progress and widespread adoption of automotive automation, particularly in autonomous driving technology, have been remarkable. The rapid evolution of this technology has equipped numerous vehicles with high-stability autonomous capabilities, significantly enhancing convenience for users. However, as autonomous driving continues its developmental trajectory, it is crucial to give equal attention to the advancement of interior automation technologies within vehicles. Exploring avenues that make these automated technologies smarter, safer, and more conducive to delivering heightened convenience and support to users is imperative. This realm presents a domain ripe with potential for innovation and substantial advancements that cater to the evolving needs and expectations of modern transportation.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241315

    Research on confusing responses based on ChatGPT

    Recently, artificial intelligence and machine learning have changed the nature of scientific inquiry, with chatbots moving from rule-based technology to AI technology. OpenAI's ChatGPT is a prominent AI language model that has attracted great interest and attention since its launch. To better understand its role and influence in social life, it is necessary to examine how it works. This paper briefly introduces the development history, current state, and future development of ChatGPT, discusses its popular application fields, and analyzes its pros and cons. On this basis, it focuses on problems that still exist in its applications. Using case analysis, this study highlights ChatGPT's confusing responses across multiple fields, reminding users that while enjoying its powerful functions they should still pay attention to its side effects and risks; the most obvious is deceptive behavior, in which misleading or fabricated information may lead to further social problems. This study speculates on the future development of ChatGPT and proposes future development directions. Generally, by rationally utilizing the functions of ChatGPT, its potential in various fields can be better realized, thereby promoting the advancement of conversational AI and its transformative impact on society.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241316

    Comparison and analysis of prediction accuracy between traditional machine learning algorithms and XGBoost algorithm in music emotion classification

    Music emotion classification determines the emotional type of music, such as happiness, sadness, anger, or passion, based on aspects like rhythm, melody, and tone. Advances in artificial intelligence can be applied to music recommendation by employing machine learning algorithms to ascertain the emotional attributes of music, enabling precise suggestions that enhance users' listening experience. This paper compares and analyzes the classification performance of a traditional Random Forest model and the XGBoost model on a Turkish music emotion dataset. We selected a comprehensive dataset from Kaggle containing a vast array of music samples and their corresponding emotional labels. The experimental results show that the Random Forest model outperforms XGBoost in accuracy, precision, and recall: Random Forest achieves 80.8% accuracy versus 75% for XGBoost, a recall of 80.8% versus 77.2%, and an F1 score of 80.5% versus 75.3%. These figures indicate that the Random Forest model offers superior predictive performance in music emotion classification. The XGBoost model nonetheless has its own advantages, such as faster training and prediction and strong generalization capabilities. In summary, Random Forest demonstrates better robustness and interpretability and handles noisy samples and missing data effectively, which explains its widespread practical application, while XGBoost excels in rapid training and prediction and in scaling to large datasets. The research outcomes hold significant importance for the study and application of music emotion classification, offering researchers and practitioners valuable guidance in selecting, optimizing, and tuning machine learning models to achieve the best classification results.
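
    The accuracy, recall, and F1 figures quoted in this abstract follow standard definitions; for reference, the following small self-contained sketch computes them for one class. The toy labels are illustrative, not the Turkish music dataset.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, recall, and F1 for one class, using the standard definitions
    behind the figures reported for the Random Forest / XGBoost comparison."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, recall, f1

# Toy emotion labels, purely illustrative.
y_true = ["happy", "sad", "happy", "angry", "happy"]
y_pred = ["happy", "sad", "sad", "angry", "happy"]
acc, rec, f1 = classification_metrics(y_true, y_pred, "happy")
```

    In a multi-class setting like emotion classification, per-class figures such as these are usually averaged (macro or weighted) to produce the single reported numbers.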

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241317

    Multi-Label Sampling based on Label Imbalance Rate and Neighborhood Distribution

    Existing multi-label classification algorithms often assume that the label distribution in the training set is balanced, but practical datasets frequently exhibit significant label imbalance, which degrades the learning and generalization performance of classifiers. To address label imbalance in multi-label classification, this paper proposes a new synthetic oversampling algorithm, Multi-Label Synthetic Oversampling based on Label Imbalance Rate and Neighborhood Distribution (MLSIN). The algorithm synthesizes new samples by considering both the imbalance rate of labels and the distribution of samples in their neighborhood, aiming to improve classifier performance on minority labels. The paper first introduces evaluation metrics for multi-label classification, then defines and computes the degree of label imbalance, describes the calculation of imbalance weights, and proposes a sample-type correction penalty strategy, detailing how the algorithm selects base and auxiliary samples. Finally, it validates the proposed method on public datasets and summarizes the experimental results.
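
    The per-label imbalance rate that MLSIN builds on can be illustrated with the commonly used IRLbl-style measure from the multi-label imbalance literature. This is a sketch of one plausible definition, not necessarily the paper's exact formula.

```python
import numpy as np

def label_imbalance_rates(Y):
    """Per-label imbalance rate for a binary label matrix Y (samples x labels):
    the most frequent label's count divided by each label's own count, in the
    spirit of the IRLbl measure. High values flag minority labels that
    oversampling should target."""
    counts = Y.sum(axis=0).astype(float)
    return counts.max() / counts

Y = np.array([[1, 0, 1],
              [1, 0, 0],
              [1, 1, 0],
              [1, 0, 0]])
rates = label_imbalance_rates(Y)   # counts [4, 1, 1] -> rates [1., 4., 4.]
```

    An oversampler can then concentrate synthesis on labels whose rate exceeds the mean rate, which is the kind of targeting the abstract describes.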

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241318

    Data mining in economics: Unraveling merchant transactions for strategic insights

    This paper explores the application of data analysis and mining techniques in the domain of economics, with a specific focus on understanding merchant transaction characteristics. The study delves into fundamental theories, including classification tasks, regression tasks, and relevance analysis, showcasing the versatility of these techniques in addressing economic challenges. Neural network models, such as the Multilayer Perceptron and Autoencoder, are introduced for handling complex economic data. The research emphasizes the importance of data mining in extracting valuable insights from real-world merchant transaction data, leading to the creation of the Merchant Transaction Feature Standard Database. A detailed data preprocessing method tailored to merchant transaction data is presented, addressing issues such as missing data, noise reduction, data integration, and transformation. The unique characteristics of real merchant transaction data, including sensitivity, concentration, sparsity, and the lack of label diversity, are outlined. The study concludes by highlighting the potential benefits of employing data mining techniques in optimizing marketing, merchant management, and risk management strategies. Overall, this paper contributes to advancing the understanding and practical applications of data analysis and mining techniques in economics.
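
    The preprocessing concerns listed in the abstract (missing data, transformation) can be sketched minimally as below. The mean-imputation and min-max scaling choices are illustrative assumptions, not the paper's actual preprocessing method.

```python
import numpy as np

def impute_and_scale(X):
    """Fill missing values (NaN) with the column mean, then min-max scale each
    column to [0, 1]: one simple instance of the cleaning and transformation
    steps described for transaction data."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    mins, maxs = X.min(axis=0), X.max(axis=0)
    ranges = np.where(maxs > mins, maxs - mins, 1.0)   # guard constant columns
    return (X - mins) / ranges

# Toy transaction matrix: amount and frequency, with one missing amount.
X = np.array([[100.0, 2.0],
              [np.nan, 4.0],
              [300.0, 6.0]])
clean = impute_and_scale(X)   # NaN -> 200.0, then each column scaled to [0, 1]
```

    Real transaction pipelines would add the noise reduction and data integration steps the abstract mentions, and would treat sensitive fields with care before any mining.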

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241319

    AI-driven transformation: From economic forecasting to strategic management

    The convergence of Artificial Intelligence (AI) technologies and various sectors within the global economy has brought forth transformative changes, shaping decision-making processes, financial practices, and economic forecasting. This comprehensive paper explores the multifaceted applications of AI in diverse domains, ranging from economic forecasting and financial markets to accounting, auditing, strategic management, and crisis management. Through empirical analysis, case studies, and scholarly insights, we uncover how AI technologies are driving efficiencies, improving predictions, enhancing decision-making, and fostering innovation across these sectors. This study offers valuable academic contributions by delving into the specific methodologies, real-world examples, and impact assessments of AI adoption, shedding light on its implications for practitioners, policymakers, and researchers.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241321

    Does digital transformation benefit innovation in manufacturing firms? Empirical evidence based on data mining and textual analysis methods

    With the increasing spread of the digital economy and the application of digital technologies, more and more manufacturing enterprises have begun to pursue digital transformation with the help of big data and other digital technologies to promote high-quality development. Through open big data platforms, manufacturing enterprises can collect data reflecting user needs in a timely manner, optimize production processes in real time, and detect problems promptly, thereby improving their capacity for research and innovation. Through the establishment of econometric models and empirical tests, we also find that managers' education level affects the process of digital transformation and influences the impact of digital transformation on innovation.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241322

    Navigating the green future: The transformative role of big data in climate change and environmental sustainability

    This article explores the profound impact of big data in the realm of climate change and environmental management. It delves into various aspects such as climate modeling and predictions, the assessment of climate change impacts, the role of data in policy making, ecosystem health monitoring, and strategies for sustainable resource management. Employing case studies and specific examples, the article demonstrates how big data analytics are transforming our approach to understanding and mitigating environmental challenges. It highlights the use of advanced computational techniques in modeling climate phenomena, assessing biodiversity shifts, optimizing waste management, and enhancing agricultural practices. The article underscores the significance of integrating big data into environmental strategies to facilitate informed decision-making and develop sustainable solutions.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241323

    Brand reputation enhancement through data mining in social media

    The rise of social media has produced a broad range of data, such as user behavior data and social relationship data, that can be used for business analysis; analysis tools can draw on this complex, extensive, and continuously tracked social media data to help enhance brand reputation. This paper introduces the definitions of and potential relationships between data analysis, social media, and brand reputation, and explains how analysis results can help enhance business reputation. Using a review methodology, collecting and analyzing a large body of literature, it draws a series of conclusions, finding that data mining of social media can effectively support business analysis in expanding brand reputation and that data mining significantly improves the effectiveness of marketing activities [1].

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241324

    Use of proper sampling techniques to research studies

    This article extensively examines several sampling strategies, emphasising probability and non-probability sampling approaches. Sampling is essential in research, as it influences the degree to which findings accurately represent and can be generalized to a larger population. The article defines probability sampling and explores its different approaches, highlighting benefits such as its capacity to guarantee impartial selection and to support statistical inference. It also delves into the constraints and considerations of probability sampling, including resource requirements and the possibility of errors in sampling frames. Subsequently, the paper examines non-probability sampling strategies, investigating methodologies such as convenience sampling, purposive sampling, and quota sampling. It examines the fundamental concepts of non-probability sampling and highlights its particular applicability to groups that are rare or hard to access. A comprehensive assessment of the benefits and drawbacks of non-probability sampling is provided, with a particular focus on problems such as researcher bias and the absence of statistical generalizability. Altogether, the article offers a thorough examination of probability and non-probability sampling approaches, allowing scholars to acquire a deeper understanding of the benefits, constraints, and suitable uses of each, and acknowledges the importance of choosing the most suitable sampling method according to research goals, available resources, and the attributes of the target population. In conclusion, this work offers a significant resource for researchers who wish to make well-informed choices about sampling procedures.
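
    The contrast between the two families of strategies can be made concrete with a small sketch: simple random sampling (probability) draws units with known, equal chances, while quota sampling (non-probability) fills fixed group quotas in encounter order. The population and quotas below are illustrative assumptions.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Probability sampling: every unit has an equal, known chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def quota_sample(population, group_of, quotas):
    """Non-probability sampling: fill a fixed quota per group in encounter
    order, with no randomisation -- convenient, but not statistically
    generalizable."""
    taken = {g: 0 for g in quotas}
    sample = []
    for unit in population:
        g = group_of(unit)
        if g in taken and taken[g] < quotas[g]:
            sample.append(unit)
            taken[g] += 1
    return sample

people = [("p%d" % i, "urban" if i % 2 == 0 else "rural") for i in range(10)]
srs = simple_random_sample(people, 4)
quota = quota_sample(people, lambda u: u[1], {"urban": 2, "rural": 2})
```

    The quota sample here always returns the first units encountered in each group, which is exactly the researcher-bias concern the article raises for non-probability methods.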

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241325

    Advancements and challenges in AI-driven language technologies: From natural language processing to language acquisition

    This paper explores the evolution and impact of artificial intelligence (AI) in the realm of language technologies. We trace the historical development of language models in AI, starting from the rule-based systems of the 1960s to the sophisticated neural networks of today. The current state-of-the-art technologies, particularly transformer-based models like OpenAI's GPT series, are examined for their capabilities and limitations. We delve into the role of AI in language acquisition and learning, highlighting AI-driven language teaching tools such as Duolingo and Babbel, and discuss their effectiveness and challenges. Furthermore, the paper explores the significant contributions of AI in second language acquisition research, including the development of predictive models and sophisticated learner profiles. Ethical considerations and challenges, such as data privacy and potential biases, are also addressed. We discuss advancements in natural language processing (NLP) applications like text and sentiment analysis, speech recognition and generation, and machine translation, along with their cross-linguistic challenges. The conclusion envisions future directions for AI in language technologies, emphasizing the need for multimodal inputs, efficiency, and enhanced interpretability.

  • Open Access | Article 2024-04-30 Doi: 10.54254/2755-2721/57/20241326

    Harmonizing form and function: The evolution, principles, and future of interactive design

    This paper explores the evolution of interactive design, tracing its journey from the early stages of computing to the present day. It begins with a historical overview, highlighting the transition from function-focused, command-line interfaces to more accessible graphical user interfaces (GUIs) and the emergence of Human-Computer Interaction (HCI) as a vital multidisciplinary field. The paper then delves into the significant technological advancements that have reshaped interactive design, such as touch interfaces, virtual and augmented reality, and the integration of artificial intelligence. Furthermore, it discusses the paradigm shift towards user-centric design, emphasizing the importance of understanding user needs and preferences in creating intuitive, enjoyable, and inclusive digital experiences. The principles of usability, aesthetics, emotional engagement, and contextual design are explored as critical components of effective interactive design. The paper concludes by highlighting the ongoing evolution of interactive design and its impact on creating user-centered digital experiences.

Copyright © 2023 EWA Publishing. Unless Otherwise Stated