Applied and Computational Engineering

- The Open Access Proceedings Series for Conferences

Volume Info.

  • Title

    Proceedings of the 2023 International Conference on Machine Learning and Automation

    Conference Date

    2023-10-18

    Website

    https://2023.confmla.org/

    Notes

     

    ISBN

    978-1-83558-289-3 (Print)

    978-1-83558-290-9 (Online)

    Published Date

    2024-01-31

    Editors

    Mustafa İSTANBULLU, Cukurova University

Articles

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230175

    Comparative analysis of machine learning techniques for cryptocurrency price prediction

    The emergence of cryptocurrencies has revolutionized the concept of digital currencies and attracted significant attention from financial markets. Predicting the price dynamics of cryptocurrencies is crucial but challenging due to their highly volatile and non-linear nature. This study compares the performance of various models in predicting cryptocurrency prices using three datasets: Bitcoin (BTC), Litecoin (LTC), and Ethereum (ETH). The models analyzed include Moving Average (MA), Logistic Regression (LR), Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM). The objective is to uncover underlying patterns in cryptocurrency price movements and identify the most accurate and reliable approach for predicting future prices. The analysis shows that the MA, LR, and ARIMA models struggle to capture the actual trend accurately. In contrast, the LSTM and CNN-LSTM models fit the actual price trend closely, with CNN-LSTM exhibiting a higher level of granularity in its predictions. These results suggest that deep learning architectures, particularly CNN-LSTM, show promise in capturing the complex dynamics of cryptocurrency prices, and they contribute to the development of improved methodologies for cryptocurrency price prediction.
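
    A minimal sketch of a CNN-LSTM forecaster of the kind compared above, assuming TensorFlow/Keras, a univariate closing-price series, and illustrative window length and layer sizes (none of these reflect the authors' reported configuration):

    ```python
    # Hedged sketch: window length, layer sizes, and optimizer are assumptions.
    import numpy as np
    import tensorflow as tf

    WINDOW = 30  # days of history per sample (assumed)

    def make_windows(prices, window=WINDOW):
        """Slice a 1-D price series into (samples, window, 1) inputs and next-day targets."""
        X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
        return X[..., None], prices[window:]

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 1)),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.LSTM(64),                                      # temporal dependencies
        tf.keras.layers.Dense(1),                                      # next-day price
    ])
    model.compile(optimizer="adam", loss="mse")

    prices = np.cumsum(np.random.randn(500)) + 100.0  # stand-in for BTC/LTC/ETH closes
    X, y = make_windows(prices)
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    ```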

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230177

    Exchange rate prediction research based on LSTM-ELM hybrid model

    The fluctuation of exchange rates holds paramount importance for a country's economic and trade activities. Due to the non-stationary and nonlinear structural characteristics of exchange rate time series, accurately predicting exchange rate movements is a challenging task. Single machine learning models often exhibit lower precision in exchange rate prediction compared to combined machine learning models. Hence, employing a combined model approach aims to enhance the predictive performance of exchange rate models. Both Long Short-Term Memory (LSTM) and Extreme Learning Machine (ELM) exhibit intricate structures, making their direct integration challenging. To address this issue, an innovative weighted approach is adopted in this study, combining LSTM and ELM models and further refining the combination weights using an improved Marine Predators Algorithm. This paper encompasses both univariate and multivariate prediction scenarios, employing two distinct allocation strategies for training and testing datasets. This is done to investigate the influence of different dataset allocations on exchange rate prediction. Finally, the proposed LSTM-ELM weighted combination exchange rate prediction model is compared with SVM, Random Forest, ELM, LSTM, and LSTM-ELM average combination models. Experimental results demonstrate that the LSTM-ELM weighted combination exchange rate prediction model outperforms the others in both univariate and multivariate prediction settings, yielding higher predictive accuracy and superior fitting performance. Consequently, the LSTM-ELM weighted combination prediction model proves to be effective in exchange rate forecasting.
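
    The combination step itself is simple once both base models have produced validation forecasts. A hedged sketch follows, with a coarse grid search standing in for the paper's improved Marine Predators Algorithm; pred_lstm, pred_elm, and y below are synthetic stand-ins for real model outputs:

    ```python
    # Weighted combination of two forecasts: find the convex weight w minimizing
    # validation MSE. The paper tunes w with an improved Marine Predators
    # Algorithm; the grid search here is only a stand-in for that optimizer.
    import numpy as np

    def combine_weight(pred_lstm, pred_elm, y_true, grid=1001):
        """Return w* so that w*pred_lstm + (1-w)*pred_elm best fits y_true."""
        ws = np.linspace(0.0, 1.0, grid)
        errs = [np.mean((w * pred_lstm + (1 - w) * pred_elm - y_true) ** 2) for w in ws]
        return ws[int(np.argmin(errs))]

    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 6, 200))                    # toy "exchange rate" series
    pred_lstm = y + 0.05 * rng.standard_normal(200)       # synthetic LSTM forecast
    pred_elm = y + 0.15 * rng.standard_normal(200)        # synthetic ELM forecast
    w = combine_weight(pred_lstm, pred_elm, y)
    combined = w * pred_lstm + (1 - w) * pred_elm
    print(f"w* = {w:.3f}, combined MSE = {np.mean((combined - y) ** 2):.5f}")
    ```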

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230178

    Review of object tracking algorithms in computer vision based on deep learning

    This paper surveys deep-learning-based object tracking algorithms in computer vision. It first introduces the importance and applications of computer vision in the field of artificial intelligence, describes its research background and definition, and outlines its broad role in fields such as autonomous driving. It then discusses various supporting techniques, including rectified linear unit (ReLU) nonlinearities, overlapping pooling, image recognition based on semi-naive Bayesian classification, human action recognition and tracking based on the S-D model, and object tracking algorithms based on convolutional neural networks and particle filters. It also addresses challenges in computer vision, such as building deeper convolutional neural networks and handling large datasets, and discusses solutions to these challenges, including activation functions, regularization, and data preprocessing. Finally, it considers future directions for computer vision, such as deep learning, reinforcement learning, 3D vision, and scene understanding. Overall, the paper highlights the importance of computer vision in artificial intelligence and its potential applications across many fields.
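
    Two of the supporting techniques named above can be shown compactly in PyTorch; the shapes and channel counts are illustrative, and "overlapping pooling" simply means a pooling stride smaller than the pooling window:

    ```python
    # ReLU nonlinearity plus overlapping max pooling (stride < kernel, as
    # popularized by AlexNet). Sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature extraction
        nn.ReLU(),                                   # max(0, x) nonlinearity
        nn.MaxPool2d(kernel_size=3, stride=2),       # 3x3 windows, stride 2: overlapping
    )
    x = torch.randn(1, 3, 64, 64)
    print(block(x).shape)  # torch.Size([1, 16, 31, 31])
    ```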

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230179

    Investigation of medical image segmentation techniques and analysis of key applications

    This research examines the application of the UNet convolutional neural network model to semantic segmentation tasks in medical imaging, comparing its efficacy with Fully Convolutional Networks (FCNs). The comparative analysis focuses on the performance of the UNet model on the dataset employed for this study. The UNet model demonstrated a clear performance advantage over the FCN model on the curated dataset, suggesting its applicability and utility for analogous tasks in medical imaging. Surprisingly, the trials revealed that data augmentation techniques did not yield a notable improvement in segmentation accuracy. This observation was especially striking given the substantial size of the dataset, which comprised 1000 images, and it suggests that the benefits of data augmentation may not always materialize on considerably large datasets. This finding prompts further investigation into its underlying causes and raises an open research question: the search for alternative methodologies that could improve segmentation accuracy on large-scale medical imaging datasets. As the field continues to mature, such open questions will keep pushing the boundaries of what is possible in medical image analysis.
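
    A minimal two-level U-Net in PyTorch, illustrating the encoder-decoder skip connection that chiefly distinguishes UNet from a plain FCN; channel counts and depth are illustrative, not the paper's configuration:

    ```python
    # Tiny U-Net sketch: one downsampling level, one skip connection.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )

    class TinyUNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.enc = conv_block(1, 16)
            self.down = nn.MaxPool2d(2)
            self.mid = conv_block(16, 32)
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec = conv_block(32, 16)           # 32 = 16 upsampled + 16 skipped
            self.head = nn.Conv2d(16, n_classes, 1)

        def forward(self, x):
            e = self.enc(x)                         # full-resolution features
            m = self.mid(self.down(e))              # bottleneck
            u = self.up(m)
            d = self.dec(torch.cat([u, e], dim=1))  # skip connection: concat encoder features
            return self.head(d)                     # per-pixel class logits

    logits = TinyUNet()(torch.randn(1, 1, 128, 128))
    print(logits.shape)  # torch.Size([1, 2, 128, 128])
    ```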

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230180

    Utilizing stable diffusion and fine-tuning models in advertising production and logo creation: An application of text-to-image technology

    This article delves into the implementation of text-to-image technology, taking advantage of stable diffusion and fine-tuning models, in the realms of advertising production and logo design. The conventional methods of production often encounter difficulties concerning cost, time constraints, and the task of locating suitable imagery. The solution suggested herein offers a more efficient and cost-effective alternative, enabling the generation of superior images and logos. The applied methodology is built around stable diffusion techniques, which employ variational autoencoders alongside diffusion models, yielding images based on textual prompts. In addition, the process is further refined by the application of fine-tuning models and adaptation processes using a Low-Rank Adaptation approach, which enhances the image generation procedure significantly. The Stable Diffusion Web User Interface offers an intuitive platform for users to navigate through various modes and settings. This strategy not only simplifies the production processes, but also decreases resource requirements, while providing ample flexibility and versatility in terms of image and logo creation. Results clearly illustrate the efficacy of the technique in producing appealing advertisements and logos. However, it is important to note some practical considerations, such as the quality of the final output and limitations inherent in text generation. Despite these potential hurdles, the use of artificial intelligence-generated content presents vast potential for transforming the advertising sector and digital content creation as a whole.
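
    The Low-Rank Adaptation step can be sketched generically: a frozen pretrained weight W is augmented with a trainable low-rank update scaled by alpha/r. The module below follows the standard LoRA formulation and is an illustration, not the authors' code:

    ```python
    # Generic LoRA wrapper for a linear layer: y = Wx + (alpha/r) * B A x.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze the pretrained weight
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(768, 768))  # e.g. an attention projection inside the UNet
    print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
    ```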

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230181

    The analysis of different authors’ views on recommendation systems based on convolutional neural networks

    Previous research has shown that recommendation systems based on convolutional neural networks can offer users the kinds of information they are likely to search for in the future. Since such recommendation systems can learn on their own, this paper assumes that other methods may also be applicable to programs based on convolutional neural networks. The paper collects and summarizes several authors' views on recommendation systems based on convolutional neural networks, along with the techniques they use to improve accuracy. The findings indicate that such recommendation systems are feasible and are used in many fields; they offer many functions, such as analyzing emotions and summarizing user features, and they can make sound judgements about users' preferences. The link between users and products deserves particular attention: more reference information should be added to the testing module to improve its accuracy, and recommendation systems should not be restricted by the current dataset. Further analysis of information such as latent emotions is needed to improve the independence of recommendation systems.

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230183

    An enhanced single-disk fast recovery algorithm based on EVENODD encoding: Research and improvements

    In the wake of rapid advancements in information technology, the need for reliable and efficient data transmission continues to grow in importance. Channel coding, as a pivotal technology, holds significant influence over data communication. This paper delves into the fundamental technologies of channel coding and their prominent applications. It first introduces the current state of research and the significance of channel coding, then provides a comprehensive illustration of and introduction to its classical coding methods. Finally, the paper elucidates the prevalent applications of different channel coding methodologies in scenarios such as the Internet of Things, 5G, and satellite communication, using real-world examples for clarity. Through this research, readers gain an understanding of the key technologies underpinning channel coding, as well as the diverse applications that typify its use. By casting light on the practical implications of channel coding in contemporary technological contexts, the paper serves as a valuable resource for those seeking to deepen their knowledge of this pivotal field.
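
    As a hedged illustration of the row-parity half of the EVENODD scheme named in the title (the diagonal-parity column and the full EVENODD construction are omitted), any single lost data column can be rebuilt by XOR-ing the survivors with the parity column:

    ```python
    # XOR row parity: the basis of single-disk recovery in EVENODD-style codes.
    from functools import reduce

    def xor_bytes(cols):
        """Byte-wise XOR across equal-length byte strings."""
        return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*cols))

    disks = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]  # toy data columns
    parity = xor_bytes(disks)                        # row-parity column

    lost = 1                                         # pretend disk 1 failed
    survivors = [d for i, d in enumerate(disks) if i != lost] + [parity]
    rebuilt = xor_bytes(survivors)
    assert rebuilt == disks[lost]
    print(rebuilt.hex())  # abcd
    ```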

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230184

    Forecasting red wine quality: A comparative examination of machine learning approaches

    This research explores the forecasting of red wine quality using machine learning algorithms, with particular emphasis on the impact of alcohol content, sulphates, total sulfur dioxide, and citric acid. The original dataset, comprising Portuguese "Vinho Verde" red wine data from 2009, was split into binary classes to delineate low-quality (ratings 1-5) and high-quality (ratings 6-10) wines. A heatmap verified the strong correlation between the chosen variables and wine quality, supporting their inclusion in the analysis. Four machine learning techniques were employed: Logistic Regression, K-Nearest Neighbors (KNN), Decision Tree, and Naive Bayes. Each technique was trained and assessed through metrics and graphical visualizations, with various proportions of data assigned for training and testing. Logistic Regression achieved an accuracy score of 72.08%, while KNN slightly surpassed it at 74%. The Decision Tree technique achieved the highest accuracy of 74.7%, while Naive Bayes underperformed with a score of 60.2%. From a comparative viewpoint, the Decision Tree exhibited superior performance, positioning it as a viable instrument for future predictions of wine quality. The capacity to predict wine quality carries significant implications for wine production, marketing, customer satisfaction, and quality control. It enables the identification of factors contributing to high-quality wine, optimization of production processes, refinement of marketing strategies, enhancement of customer service, and potential early identification of substandard wines before they reach consumers, thereby safeguarding the brand reputation of wineries.
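
    A reproduction-style sketch of the comparison, assuming the commonly mirrored UCI location of the "Vinho Verde" data, the four-feature subset named above, a quality >= 6 cutoff, and a 75/25 split (the split ratio and hyperparameters are assumptions, so scores will differ from the paper's):

    ```python
    # Four scikit-learn classifiers on the UCI red-wine data, binarized at quality >= 6.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "wine-quality/winequality-red.csv")
    df = pd.read_csv(url, sep=";")
    X = df[["alcohol", "sulphates", "total sulfur dioxide", "citric acid"]]
    y = (df["quality"] >= 6).astype(int)             # high quality: ratings 6-10

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    models = {
        "LogisticRegression": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
        "DecisionTree": DecisionTreeClassifier(random_state=0),
        "NaiveBayes": GaussianNB(),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        print(name, f"{accuracy_score(y_te, m.predict(X_te)):.3f}")
    ```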

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230185

    Comprehensive evaluation and enhancement of Reed-Solomon codes in RAID6 data storage systems

    This paper provides an in-depth examination and optimization of Reed-Solomon codes within the context of Redundant Array of Independent Disks 6 (RAID6) data storage configurations. With the swift advancement of digital technology, the need for secure and efficient data storage methods has sharply escalated. This study delves into the application of Reed-Solomon codes, which are acclaimed for their unparalleled ability to rectify multiple errors, and their crucial role in maintaining RAID6 system operation even under multiple disk failures. The intricacies of Reed-Solomon codes are scrutinized, and the system's resilience in various disk failure scenarios is evaluated, contrasting the performance of Reed-Solomon codes with other error correction methodologies like Hamming codes, Bose-Chaudhuri-Hocquenghem codes, and Low-Density Parity-Check codes. Rigorous testing underscores the robust error correction capabilities of Reed-Solomon encoding in an array of scenarios, affirming its efficacy. Additionally, potential enhancement strategies for the implementation of these codes are proposed, encompassing refinements to the algorithm, the adoption of efficient data structures, the utilization of parallel computing techniques, and hardware acceleration approaches. The findings underscore the balance that Reed-Solomon codes strike between robust error correction and manageable computational complexity, positioning them as the optimal selection for RAID6 systems.
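
    The RAID6 arithmetic at issue can be sketched directly: P is plain XOR parity, and Q is a Reed-Solomon-style syndrome over GF(2^8) with generator g = 2 and the conventional polynomial 0x11d. Together P and Q tolerate any two disk failures; only the single-failure rebuild via P is shown here:

    ```python
    # RAID6 P/Q parity over GF(2^8).
    def gf_mul(a, b, poly=0x11D):
        """Carry-less multiplication in GF(2^8) ('Russian peasant' method)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= poly
            b >>= 1
        return r

    def raid6_parity(disks):
        """P = XOR parity; Q = sum over GF(2^8) of g^i * D_i with g = 2."""
        n = len(disks[0])
        P, Q = bytearray(n), bytearray(n)
        g_i = 1
        for d in disks:
            for k in range(n):
                P[k] ^= d[k]
                Q[k] ^= gf_mul(g_i, d[k])
            g_i = gf_mul(g_i, 2)      # advance generator power for the next disk
        return bytes(P), bytes(Q)

    disks = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]
    P, Q = raid6_parity(disks)
    # One failed data disk is rebuilt from P alone by XOR-ing the survivors:
    rebuilt = bytes(P[k] ^ disks[0][k] ^ disks[2][k] for k in range(len(P)))
    assert rebuilt == disks[1]
    print(rebuilt.hex())  # 1020
    ```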

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230186

    Exploring the application and performance of extended Hamming code in IoT devices

    This study primarily focuses on the implementation of extended Hamming code within Internet of Things (IoT) devices and examines its impact on device performance, particularly in relation to communication protocols. The research begins by introducing and explaining the essential principles surrounding the extended Hamming code and its system. This introduction is followed by a detailed analysis of its practical application in IoT device communication and the subsequent influence on performance. Additionally, the study explores the potential role of extended Hamming code in strengthening the security measures of IoT devices. Experimental findings indicate that incorporating extended Hamming code can effectively enhance the communication efficiency of IoT devices, ensuring accurate data transmission. It also improves the overall operational efficiency of the devices and fortifies their security framework. Yet, despite these promising outcomes, the real-world application of extended Hamming code presents significant challenges. These hurdles highlight the need for continued research and exploration to maximize the potential of the extended Hamming code in the IoT domain. The study concludes with an optimistic outlook, encouraging ongoing investigation and innovation to further optimize the benefits of this code and drive advancements in IoT technology.
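
    A hedged sketch of an extended Hamming (8,4) SECDED codec of the kind studied: single-bit errors are corrected, and double-bit errors are detected but not corrected. The bit layout [p1 p2 d1 p3 d2 d3 d4 p0] is one common convention, assumed here rather than taken from the paper:

    ```python
    # Extended Hamming (8,4): Hamming(7,4) plus an overall parity bit p0.
    def encode(d):                      # d = [d1, d2, d3, d4]
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        cw = [p1, p2, d1, p3, d2, d3, d4]
        return cw + [sum(cw) % 2]       # p0: overall parity over the 7-bit word

    def decode(cw):
        c = cw[:7]
        # Syndrome bits: each parity check covers its Hamming positions.
        s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
             | ((c[1] ^ c[2] ^ c[5] ^ c[6]) << 1)
             | ((c[3] ^ c[4] ^ c[5] ^ c[6]) << 2))
        overall = sum(cw) % 2
        if s and overall:               # single-bit error: syndrome = its position
            c[s - 1] ^= 1
        elif s and not overall:         # double-bit error: detected, uncorrectable
            return None
        return [c[2], c[4], c[5], c[6]]

    cw = encode([1, 0, 1, 1])
    cw[3] ^= 1                          # flip one bit in transit
    print(decode(cw))                   # [1, 0, 1, 1]: corrected
    ```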

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230187

    Examination of essential technologies and representative applications in RAID 6

    The evolution of the Internet of Things (IoT) has significantly intensified the interconnectivity between various entities. The robust advancement of information technology has ushered in a societal upswing while simultaneously triggering an exponential increase in data volumes. Consequently, the efficiency of access to storage systems and the reliability of data are severely challenged. Researchers are actively seeking efficient solutions to these challenges. The RAID storage system, with its commendable access performance, excellent scalability, and relative affordability, has become a preferred choice for the storage servers of numerous enterprises. This paper delves into the workings of RAID 6, erasure codes, and capacity expansion, thereby exploring the feasibility of various capacity expansion strategies. Effectively, a well-designed expansion scheme can mitigate issues related to insufficient storage capacity. Simultaneously, the configuration of the code plays a crucial role in determining the expansion time and consequently influences expansion efficiency. Overall, the information and findings presented in this study contribute to enhancing our understanding and management of storage systems in an increasingly data-intensive era.

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230188

    Research on feature coding theory and typical application analysis in machine learning algorithms

    The world economy remains in a downturn. Promoting economic recovery, improving production relations and production efficiency, stimulating the expansion and upgrading of consumption, and accelerating industrial transformation all demand urgent solutions, and solving these problems requires more capable tools, artificial intelligence among them. Machine learning is the key that distinguishes artificial intelligence from ordinary program code. Unlike human learning, machine learning has its own distinctive algorithms and behavioral logic. As a technology active in the field of artificial intelligence in recent years, machine learning studies how computers learn, simulate, and realize parts of human learning behavior, so as to provide data mining and behavior prediction for humans, acquire new knowledge or skills, and strengthen machines' existing capabilities. In this study, a variety of common coding algorithms and learning strategies in machine learning are discussed; supervised learning algorithms are taken as examples among the learning strategies, models are selected and evaluated for a variety of algorithms, and parameters are tuned and performance analyzed. Building on this theoretical analysis, the paper makes tentative applications in three fields: housing prices, physical store sales, and digit recognition, exploring and selecting the appropriate application method for each scenario and thereby expanding the application domain of machine learning.
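
    Two of the common feature-coding schemes such a study typically covers, sketched with scikit-learn (the sparse_output flag assumes scikit-learn >= 1.2); the toy housing-style data is illustrative:

    ```python
    # One-hot encoding for nominal categories; ordinal encoding for ordered ones.
    import pandas as pd
    from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

    df = pd.DataFrame({
        "district": ["north", "south", "south", "east"],     # nominal feature
        "condition": ["fair", "good", "excellent", "good"],  # ordered feature
    })

    onehot = OneHotEncoder(sparse_output=False)              # one binary column per category
    district = onehot.fit_transform(df[["district"]])

    ordinal = OrdinalEncoder(categories=[["fair", "good", "excellent"]])
    condition = ordinal.fit_transform(df[["condition"]])     # 0 < 1 < 2 preserves order

    print(onehot.get_feature_names_out(), district.shape)    # 3 columns, shape (4, 3)
    print(condition.ravel())                                 # [0. 1. 2. 1.]
    ```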

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230189

    Research and application exploration of WiFi-based identification technology in the context of next-generation communication

    With the rapid advancement of wireless networks and the widespread use of WiFi technology, there is growing interest in utilizing WiFi information for identification purposes. These emerging identification technologies have had a profound impact on many aspects of modern life, such as smart furniture research, intelligent security systems, and human-computer interaction. This paper delves into the research and application exploration of WiFi-based identification technology within the context of next-generation communication. It introduces the working principles of Channel State Information (CSI) and discusses the fundamental technologies of Multiple-Input Multiple-Output (MIMO) and Orthogonal Frequency Division Multiplexing (OFDM) in detail. Additionally, it explores current applications and presents a promising future for identification technology from a developmental perspective. By examining the advantages and challenges associated with WiFi-based identification technology, this paper sheds light on its potential impact in various domains. Understanding and exploring this technology is crucial, as it can enhance user experiences, optimize resource allocation, and facilitate intelligent and adaptive systems in the new generation.
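
    The kind of CSI preprocessing such systems rely on can be sketched with NumPy: each OFDM subcarrier on each MIMO antenna pair yields a complex channel coefficient, whose amplitude (and, after unwrapping and calibration, phase) sequences become identification features. The tensor layout and sizes below are illustrative assumptions:

    ```python
    # Amplitude features from a synthetic CSI tensor.
    import numpy as np

    rng = np.random.default_rng(0)
    # (packets, tx antennas, rx antennas, subcarriers): one common CSI layout
    csi = (rng.standard_normal((100, 2, 3, 30))
           + 1j * rng.standard_normal((100, 2, 3, 30)))

    amplitude = np.abs(csi)                        # robust, widely used feature
    phase = np.unwrap(np.angle(csi), axis=-1)      # phase needs unwrapping (and calibration)

    # Simple per-link features: mean and variance of amplitude over time.
    features = np.concatenate([amplitude.mean(axis=0).ravel(),
                               amplitude.var(axis=0).ravel()])
    print(features.shape)                          # (360,) = 2 * (2 * 3 * 30)
    ```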

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230190

    Comparison of different algorithms in Reversi AI

    While minimax and alpha-beta pruning have been widely applied in AI for various strategic board games, the use of greedy algorithms in this context has received less attention. Greedy algorithms make locally optimal choices at each step, exploiting immediate gains. This research aims to reveal the potential benefits and limitations of applying greedy algorithms in Reversi game AI, specifically through a comparison with the Minimax algorithm. A series of AI-versus-AI matches was conducted to evaluate and compare the performance of the three AI algorithms, assessing their gameplay strategies and decision-making abilities by measuring average execution time and win rates. The code and experiments were implemented in a C++ environment; the code shown in this article is limited to pseudocode and comments. In conclusion, the findings indicate that the Greedy Algorithm is not a superior alternative to the Minimax Algorithm in competitive scenarios, particularly as search depth increases. However, greedy algorithms remain modestly competitive while requiring less computation time. Future research could focus on improving the performance of the Greedy Algorithm by incorporating more advanced heuristics or adaptive strategies. Additionally, combining the strengths of the Greedy and Minimax Algorithms could be a promising direction for further investigation.
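
    Since the article itself shows only pseudocode, here is a hedged Python skeleton of the two strategies (the paper's experiments are in C++): legal_moves, apply, and evaluate are assumed game-interface functions, players are encoded as +1/-1, and Reversi pass-turn and game-end handling are omitted:

    ```python
    # Greedy vs. minimax-with-alpha-beta skeletons over an abstract game interface.
    import math

    def greedy_move(board, player, legal_moves, apply, evaluate):
        """Pick the move with the best immediate evaluation (locally optimal)."""
        return max(legal_moves(board, player),
                   key=lambda m: evaluate(apply(board, m, player), player))

    def minimax(board, depth, alpha, beta, maximizing, player,
                legal_moves, apply, evaluate):
        mover = player if maximizing else -player
        moves = legal_moves(board, mover)
        if depth == 0 or not moves:
            return evaluate(board, player), None
        best_move = None
        if maximizing:
            value = -math.inf
            for m in moves:
                score, _ = minimax(apply(board, m, mover), depth - 1,
                                   alpha, beta, False, player,
                                   legal_moves, apply, evaluate)
                if score > value:
                    value, best_move = score, m
                alpha = max(alpha, value)
                if alpha >= beta:          # beta cutoff: opponent avoids this branch
                    break
        else:
            value = math.inf
            for m in moves:
                score, _ = minimax(apply(board, m, mover), depth - 1,
                                   alpha, beta, True, player,
                                   legal_moves, apply, evaluate)
                if score < value:
                    value, best_move = score, m
                beta = min(beta, value)
                if beta <= alpha:          # alpha cutoff
                    break
        return value, best_move
    ```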

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230191

    Stochastic simulation methods in the study of cell rhythm

    Stochastic simulation methods play a crucial role in the study of cellular rhythms. Based on the characteristics of stochastic algorithms, we can more accurately capture the noise effects existing in biological systems and explore their impact on cell rhythms. The findings from stochastic simulation methods shed light on how cell rhythms operate at the molecular level, and this paper presents them inductively for different algorithm types, enabling a deeper understanding of their characteristics. Furthermore, based on the analysis of existing studies, this paper finds that a stochastic simulation approach that considers spatial heterogeneity and intercellular coupling helps reveal the design principles and functional characteristics of the cellular rhythmic system. However, existing stochastic methods also have limitations, including the arbitrariness of parameters and ignoring spatial features. This paper argues that future improvements should focus on integrating quantitative data, accounting for spatial effects, and increasing computational efficiency. These enhancements will contribute to a comprehensive understanding of the generation of cellular rhythms and their importance in biological processes.
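
    The exact stochastic simulation algorithm (Gillespie's SSA) underlying many of the methods reviewed fits in a few lines; the birth-death process below is a toy stand-in for a full rhythm model:

    ```python
    # Gillespie SSA: exact trajectories of a chemical master equation.
    import numpy as np

    def gillespie(x0, propensities, stoich, t_end, rng=None):
        """x0: initial counts; propensities(x): reaction rates; stoich: state changes."""
        rng = rng or np.random.default_rng()
        t, x = 0.0, np.asarray(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0:
                break
            t += rng.exponential(1.0 / a0)                 # waiting time to next reaction
            x = x + stoich[rng.choice(len(a), p=a / a0)]   # pick which reaction fires
            times.append(t); states.append(x.copy())
        return np.array(times), np.array(states)

    # Toy birth-death process: 0 -> X at rate k; X -> 0 at rate g * X.
    k, g = 10.0, 0.1
    _, states = gillespie([0], lambda x: np.array([k, g * x[0]]),
                          [np.array([1.0]), np.array([-1.0])], t_end=50.0)
    print(f"final count: {states[-1][0]:.0f}")             # fluctuates around k/g = 100
    ```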

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230194

    Biodegradable materials in tissue engineering and regenerative medicine

    Regenerative medicine signifies that medicine will enter a new era of reconstruction, manufacture, and replacement of tissues and organs. At the same time, mankind faces many challenges that the development of medicine brings. The progress of medical science and technology, together with the concepts of tissue engineering and regenerative medicine, plays a significant role in advancing human medical technology and the future regeneration and repair of tissues and organs. This paper introduces the concept of biodegradable materials and categorizes biodegradable biomaterials into synthetic ones, such as polylactic acid derivatives and copolymers of polyhydroxyacetic acid (PHA) and polylactic acid (PLA), and naturally occurring ones, such as collagen and chitosan, and covers their applications and research in tissue engineering. Finally, we offer an outlook on the role of regenerative medicine in future human health management and repair.

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230195

    Flexible wearable biosensor for physiological parameters monitoring during exercising

    With the continuous improvement of quality of life, people's demand for sports is gradually increasing. Monitoring physiological parameters during exercise therefore not only helps improve athletes' performance but also safeguards training quality for sports enthusiasts. Monitoring blood oxygen saturation is essential for assessing a person's oxygen levels, and it can be achieved through electrochemical or optical methods. This paper focuses on the optical method, which utilizes a pulse oximeter equipped with an infrared light source and a light sensor. The device measures the absorption of light by hemoglobin, allowing the calculation of oxygen saturation. Flexible temperature biosensors are designed to measure environmental or object temperatures using thermosensitive materials or thermistors. The flexibility of these sensors allows them to adapt to irregular surfaces and curved shapes, making them suitable for monitoring human body temperature during exercise. Wearable heart rate monitors use various detection techniques, such as photoplethysmography (PPG) and electrocardiography (ECG). PPG measures changes in blood volume using LEDs and photodiodes, while ECG detects the heart's electrical impulses. These monitors are widely used in fitness and sports settings, enabling users to track their cardiovascular health, adjust exercise intensity, and set performance goals. The incorporation of additional accelerometers or gyroscopes enhances heart rate monitoring accuracy during exercise, filtering out motion-induced noise for reliable data.
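
    The optical SpO2 computation described above reduces to a "ratio of ratios": pulsatile (AC) over steady (DC) absorption at red versus infrared wavelengths. The linear map SpO2 ≈ 110 - 25R is a commonly cited empirical approximation, not a device standard; real oximeters use device-specific calibration curves:

    ```python
    # Ratio-of-ratios SpO2 estimate from two synthetic PPG traces.
    import numpy as np

    def spo2_from_ppg(red, ir):
        """red, ir: raw photodiode sample arrays from the red and infrared LEDs."""
        r_ac, r_dc = np.ptp(red), np.mean(red)       # peak-to-peak (AC) and mean (DC)
        i_ac, i_dc = np.ptp(ir), np.mean(ir)
        R = (r_ac / r_dc) / (i_ac / i_dc)            # ratio of ratios
        return float(np.clip(110.0 - 25.0 * R, 0.0, 100.0))  # empirical approximation

    t = np.linspace(0, 10, 500)                      # synthetic 10 s traces
    red = 2.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)   # 72 bpm pulse component
    ir = 2.5 + 0.06 * np.sin(2 * np.pi * 1.2 * t)
    print(f"SpO2 ~ {spo2_from_ppg(red, ir):.1f}%")
    ```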

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230196

    Biosensors for ocean acidification detection

    Ocean acidification is a global environmental problem that significantly impacts marine ecosystems and biodiversity. Traditional chemical analysis methods for ocean acidification monitoring suffer from complex equipment and high costs. In recent years, fluorescent protein biosensor technology, as an innovative monitoring method, has provided a new solution for the real-time detection of ocean acidification. Compared with traditional chemical analysis methods, fluorescent protein biosensors offer simple operation, high sensitivity, and low cost. Current studies have demonstrated the potential of fluorescent protein biosensors for ocean acidification monitoring. Researchers have designed a variety of fluorescent protein biosensors and conducted indoor and outdoor experimental validation. These results show that fluorescent protein biosensors can detect ocean acidification quickly and accurately and maintain stable performance under different environmental conditions. Further studies are needed to verify the consistency and reliability of fluorescent protein biosensors against traditional chemical analysis methods for ocean acidification monitoring. Future research directions include further improving the performance of fluorescent protein biosensors, increasing their sensitivity and stability, and verifying their application in real marine environments. This will help establish a better monitoring network for ocean acidification and provide a reliable scientific basis for marine environmental protection and management decisions. The development and application of fluorescent protein biosensor technology will provide important support and guidance for better understanding the impact of ocean acidification.
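
    How a ratiometric fluorescent readout maps to pH can be sketched with a simplified Henderson-Hasselbalch inversion; pKa, R_min, and R_max are sensor-specific calibration constants, and the values below are illustrative assumptions, not from the paper:

    ```python
    # Simplified ratiometric calibration: emission ratio R -> pH.
    import numpy as np

    PKA, R_MIN, R_MAX = 7.5, 0.2, 2.0     # assumed calibration of the sensor

    def ph_from_ratio(R):
        """Invert a sigmoidal titration curve (wavelength correction factor omitted)."""
        R = np.clip(R, R_MIN + 1e-6, R_MAX - 1e-6)
        return PKA + np.log10((R - R_MIN) / (R_MAX - R))

    # A drop in seawater pH (acidification) shifts the measured ratio downward.
    for R in (1.4, 1.1, 0.9):
        print(f"ratio {R:.1f} -> pH {ph_from_ratio(R):.2f}")
    ```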

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230199

    Enhanced diffusion model based on similarity for handwritten digit generation

    In recent years, with the rise of deep learning, there has been a major revolution in image generation technology. Deep learning models, especially diffusion models, have brought about breakthrough progress in image generation, and various deep generative models have recently demonstrated high-quality samples across a wide variety of data patterns. Although image generation technology has achieved remarkable results, challenges remain, such as quality control in generated images. To improve the robustness and performance of diffusion models in image generation, this paper proposes an enhanced diffusion model based on similarity. Building on the original diffusion model, a similarity loss function is added to narrow the semantic distance between the original image and the generated image, making the generated image more robust. Extensive experiments were carried out on the MNIST dataset, and the results show that, compared with other generative models, the enhanced diffusion model based on similarity obtained the best scores of IS = 31.61 and FID = 175.21, verifying the validity of the similarity loss.
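
    A hedged sketch of the similarity-augmented objective described above: the standard DDPM noise-prediction loss plus a term pulling a one-step estimate of the original image toward the input. The cosine-similarity form and weight lam are illustrative guesses at the paper's loss, not its exact definition:

    ```python
    # Diffusion training loss with an added similarity term.
    import torch
    import torch.nn.functional as F

    def diffusion_loss_with_similarity(model, x0, t, alphas_cumprod, lam=0.1):
        """model: noise-prediction network with signature model(x_t, t) (assumed)."""
        noise = torch.randn_like(x0)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)               # cumulative alpha per sample
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise          # forward process q(x_t | x_0)
        pred_noise = model(x_t, t)
        loss_ddpm = F.mse_loss(pred_noise, noise)             # standard DDPM objective
        x0_hat = (x_t - (1 - a).sqrt() * pred_noise) / a.sqrt()  # one-step estimate of x_0
        sim = F.cosine_similarity(x0_hat.flatten(1), x0.flatten(1), dim=1).mean()
        return loss_ddpm + lam * (1 - sim)                    # narrows the semantic distance
    ```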

  • Open Access | Article 2024-01-31 Doi: 10.54254/2755-2721/32/20230200

    A study of human pose estimation in low-light environments using YOLOv8 model

    Human pose estimation is a formidable task in the field of computer vision, often constrained by limited training samples and various complexities encountered during target detection, including complex backgrounds, object occlusion, crowded scenes, and varying perspectives. The primary objective of this paper is to explore the performance disparities of the recently introduced YOLOv8 model in the context of human pose estimation. We conduct a comprehensive evaluation of six models of varying complexity on the same low-light photograph to assess their precision and speed, with the aim of determining the suitability of each model for specific environmental contexts. The experimental results demonstrate a partial regression in accuracy for the yolov8s-pose and yolov8m-pose models when tested on our sampled images. An increase in model layers indicates enhanced complexity and expressive power, while additional parameters signify improved learning capability at the expense of increased computational resource requirements.
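
    Running released pose checkpoints with the ultralytics package looks roughly like this; "low_light.jpg" is a placeholder path, the loop covers three of the model sizes, and the printed speed dictionary reports per-stage times in milliseconds:

    ```python
    # Comparing yolov8*-pose checkpoints on one low-light image.
    from ultralytics import YOLO

    for name in ("yolov8n-pose.pt", "yolov8s-pose.pt", "yolov8m-pose.pt"):
        model = YOLO(name)                 # downloads the checkpoint if not cached
        res = model("low_light.jpg")[0]    # single-image inference
        # Keypoints per detected person and preprocess/inference/postprocess times:
        print(name, res.keypoints.xy.shape, res.speed)
    ```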

Copyright © 2023 EWA Publishing, unless otherwise stated.