Applied and Computational Engineering
- The Open Access Proceedings Series for Conferences
Proceedings of the 5th International Conference on Computing and Data Science
2023-07-14
978-1-83558-021-9 (Print)
978-1-83558-022-6 (Online)
2023-10-23
Marwan Omar, Illinois Institute of Technology
Roman Bauer, University of Surrey
Alan Wang, University of Auckland
Medical image segmentation can provide valuable information for doctors and therefore has important research value in the medical field. U-Net, as the fundamental network for such tasks, brought a substantial improvement in segmentation performance over traditional medical image methods. With the increasingly widespread use of U-Net, researchers have designed various U-Net variants according to different task requirements. However, most current summaries of U-Net variants are organized by application area, and the structural relationship between the variant networks and U-Net is not elaborated. Therefore, this paper elaborates the principles of the U-Net structure and classifies U-Net variants according to their network framework into three main categories: backbone improvement, module addition, and cross-network fusion. The characteristics, advantages, and disadvantages of the different categories of variants are introduced, and the directions in which the variants optimize U-Net are analyzed. Finally, the article summarizes the current development directions of U-Net variants and provides an outlook on future directions for further optimization.
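For context, the sketch below (a minimal PyTorch example, not taken from any surveyed paper) shows the plain encoder-decoder-with-skip-connections pattern that this taxonomy is organised around: "backbone improvement" replaces the convolutional blocks, "module addition" inserts extra components such as attention on the skips or the bottleneck, and "cross-network fusion" combines this skeleton with other networks.

```python
# Minimal U-Net skeleton: encoder, bottleneck, decoder, and one skip connection.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, c_in=1, c_out=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(c_in, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)            # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, c_out, 1)

    def forward(self, x):
        e1 = self.enc1(x)                         # encoder feature map
        e2 = self.enc2(self.pool(e1))             # bottleneck
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)
```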
Cat species recognition holds significant potential in many fields. The primary objective of this research is to develop an automated algorithm for recognizing the presence of cats in images. The application prospects of this algorithm are diverse and include security, image search, and social media, so the research has considerable practical value in various domains. In this study, we propose a cat image recognition algorithm based on PyTorch, with ResNet50 as the foundational network architecture and an Efficient Channel Attention (ECA) mechanism integrated into the model for improved performance. We first introduce the ResNet network and then describe in detail how the attention mechanism is combined with ResNet. The proposed model achieved a 92.37% accuracy rate in classifying the 12 cat species, demonstrating its efficacy in accurately classifying and recognizing the collected images. The conclusions of this paper offer useful reference value for related work.
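As an illustration of the described architecture, the sketch below shows one common way to attach an Efficient Channel Attention block to a torchvision ResNet50 backbone in PyTorch; the kernel size, the placement after the last stage, and the 12-class head are assumptions rather than the authors' exact configuration.

```python
# ECA re-weights channels with a lightweight 1D convolution over pooled descriptors.
import torch
import torch.nn as nn
from torchvision import models

class ECA(nn.Module):
    def __init__(self, k: int = 3):                       # assumed kernel size
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        y = x.mean(dim=(2, 3))                             # global average pooling: (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)           # local cross-channel interaction
        y = self.sigmoid(y)[:, :, None, None]
        return x * y                                       # re-weight feature maps

class ECAResNet50(nn.Module):
    def __init__(self, num_classes: int = 12):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.eca = ECA()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        x = self.eca(self.features(x))
        return self.fc(self.pool(x).flatten(1))
```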
The protection of user data interests is of utmost significance in an age when everyone's private details and possessions are being digitised and stored in the cloud; data now rules the world. This paper reviews the relevant literature in order to study the balance of rights in the protection of user data interests in the era of big data, and then suggests countermeasures for the problems it identifies. According to the findings of this study, the key factors contributing to the imbalance in the protection of user data rights are the progression of science and technology, the increase in the value of user data, and a lack of awareness regarding the protection of personal data rights.
Deep Q-learning Network (DQN) is an algorithm that combines Q-learning with a deep neural network; its model can accept high-dimensional input and produce low-dimensional output. As a deep reinforcement learning algorithm proposed ten years ago, its performance on some Atari games has surpassed all previous algorithms and even some human experts, which fully reflects DQN's high research value. The tuning of hyperparameters is crucial for any algorithm, especially for those with strong performance: the same algorithm can produce completely different results with different sets of hyperparameters, and suitable values can considerably improve it. Based on the DQN we implement, we test different values for the number of episodes, the size of the replay buffer, gamma, the learning rate, and the batch size. In each round of experiments, all hyperparameters except the target one use default values, and we record the impact of these changes on training performance. The results indicate that as the number of episodes increases, performance improves steadily but with diminishing returns. The same conclusion applies to the size of the replay buffer, while the other hyperparameters must be set to suitable values to achieve optimal performance.
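The one-at-a-time sweep protocol described above can be expressed roughly as follows; the default values, the candidate grids, and the train_dqn placeholder are illustrative assumptions, not the authors' exact settings.

```python
# Vary one hyperparameter at a time while the rest stay at their defaults.
DEFAULTS = dict(episodes=500, buffer_size=10_000, gamma=0.99,
                lr=1e-3, batch_size=32)

SWEEPS = {
    "episodes":    [100, 300, 500, 1000],
    "buffer_size": [1_000, 10_000, 50_000],
    "gamma":       [0.90, 0.95, 0.99],
    "lr":          [1e-4, 1e-3, 1e-2],
    "batch_size":  [16, 32, 64, 128],
}

def train_dqn(**hparams) -> float:
    """Stand-in for the actual DQN training run; should return the mean episode return."""
    return 0.0  # replace with the real training loop

results = {}
for name, values in SWEEPS.items():
    for value in values:
        hparams = {**DEFAULTS, name: value}      # only the target hyperparameter changes
        results[(name, value)] = train_dqn(**hparams)
```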
Mobile robots are used extensively across a variety of industries and fields. How to find a collision-free path from start to goal has become a hot topic in recent years due to the complexity and uncertainty of working environments. In various environments, a path planning technique should demonstrate high efficiency and speed, which can reduce the energy consumption of the robots and greatly increase their working efficiency. This paper reviews the currently popular path planning algorithms. Based on their different features, they are divided into three types: traditional path planning algorithms, neural-network-based algorithms, and sampling-based algorithms. Drawing on recent papers, a detailed introduction to these algorithms and their variants is given. The paper closes with a summary and an outlook on future research trends.
Image stitching is the process of combining numerous photos to make a panorama. The technique has rapidly advanced and grown to be a significant area of digital image processing, and many image stitching methods have been proposed and studied in prior work. In this paper, the image stitching process is implemented using different algorithms. For keypoint detection, Harris corner detection, SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) are applied; different methods (e.g., brute-force matching) are then used for feature matching. The RANSAC (Random Sample Consensus) method is used to estimate a homography matrix from the matched feature points, which is then used to warp the images. Image blending and cropping methods are proposed to enhance the image quality. Given groups of self-captured images, experiments have been conducted to show the performance of the different techniques.
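A minimal two-image version of this pipeline, assuming OpenCV and using ORB as the detector (the other detectors named above can be swapped in at the detection step), might look like the following sketch.

```python
# Detect keypoints, match with brute force, estimate a homography with RANSAC, warp.
import cv2
import numpy as np

def stitch(img_left, img_right):
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)

    # Brute-force Hamming matching suits ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences while estimating the homography
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    pano[:h, :w] = img_left          # naive overlay; blending and cropping refine this
    return pano
```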
Channel coding plays a crucial role in enhancing the reliability and efficiency of communication systems, particularly when transmission channels are disrupted by noise and interference. This paper presents an in-depth review of various channel coding techniques, their applications, and future research directions. Key topics discussed include prevalent channel coding methods, such as repetition codes, convolutional codes, LDPC codes, turbo codes, and polar codes. The paper also delves into the selection of suitable channel coding parameters and their applications in digital TV, mobile and satellite communications, unmanned aerial vehicle data links, speech communication, and underwater acoustic channels. Moreover, the paper explores the performance analysis and comparison of different channel coding techniques, shedding light on their strengths and weaknesses. Lastly, the paper identifies emerging trends and challenges in channel coding research, providing valuable insights for researchers and practitioners in the field of communication systems. By examining these techniques and future directions, this comprehensive overview aims to contribute to the development of more robust and efficient channel coding schemes for a wide range of communication applications.
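As a toy illustration of the simplest scheme mentioned above, the snippet below encodes bits with a rate-1/3 repetition code and decodes them by majority vote; the practical codes discussed in the paper (convolutional, LDPC, turbo, polar) are of course far more sophisticated.

```python
# Repetition code: repeat each bit n times, decode by majority vote.
def rep_encode(bits, n=3):
    return [b for b in bits for _ in range(n)]

def rep_decode(coded, n=3):
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

msg = [1, 0, 1, 1]
tx = rep_encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[1] ^= 1                    # a single-bit channel error
assert rep_decode(tx) == msg  # the majority vote corrects it
```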
With the continuous improvement of mobile phone camera manufacturing, the demand from industry and the public for panoramic photos is no longer limited to ordinary cylindrical panoramas, so a series of accurate panoramic image stitching techniques have been developed. However, these techniques often fail to balance stitching quality and speed, and there is still a technical gap in current multi-image panoramic stitching. This article compares the accuracy and efficiency of various feature extraction algorithms in order to select the most suitable one for obtaining feature point pairs and estimating the homography matrix. Finally, a dual-band hybrid algorithm is used to warp and fuse multiple images into a panorama. The results show that the panoramic images generated from multiple images with the proposed algorithm are of high quality, apart from slight light-dark banding and ghosting.
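As a baseline for the multi-image case, the sketch below uses OpenCV's high-level Stitcher, which internally performs feature matching, homography estimation, and multi-band blending; it is an assumed reference pipeline, not the paper's own dual-band hybrid implementation, and the input file paths are hypothetical.

```python
# High-level multi-image panorama stitching with OpenCV's built-in Stitcher.
import glob
import cv2

images = [cv2.imread(p) for p in sorted(glob.glob("frames/*.jpg"))]  # hypothetical paths
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)
if status == 0:                       # 0 corresponds to Stitcher::OK
    cv2.imwrite("panorama.jpg", pano)
else:
    print(f"Stitching failed with status code {status}")
```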
Digital currencies have become an increasingly popular topic of discussion in recent years. Digital currencies are virtual forms of currency that operate outside the traditional banking system. They are based on cryptographic technologies and are often decentralized, meaning they are not controlled by a central authority. The most well-known digital currency is Bitcoin, but many other types exist. Digital currencies can be used to purchase goods and services online or transferred between users directly without intermediaries such as banks. They have gained popularity due to their potential for increased security, transparency, and efficiency in financial transactions. Today, a wide variety of digital currencies continue to emerge, and cryptographic technology keeps developing to improve the security of digital currency payments. Section 2 of this paper briefly introduces several common digital currencies and encryption algorithms, and Section 3 introduces these typical digital currencies in detail through an analysis of representative literature. Bitcoin is encrypted mainly on the basis of blockchain technology, and its encryption principle divides into three parts: public key encryption, hash functions, and proof of work. Ethereum is a distributed blockchain platform with encryption principles similar to Bitcoin's, including public key encryption and hashing algorithms. Ripple is a distributed cryptocurrency whose encryption principle mainly adopts a public-private key encryption system. In terms of encryption technology, blockchain technology, hash algorithms, and symmetric and asymmetric encryption are also widely used in digital currencies.
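The proof-of-work idea mentioned for Bitcoin can be illustrated with the following toy sketch (not a real Bitcoin implementation): a nonce is searched for until the SHA-256 hash of the block data begins with a required number of zero hex digits, so finding the nonce is costly while verifying it takes a single hash.

```python
# Toy proof-of-work: brute-force a nonce until the hash meets the difficulty target.
import hashlib

def proof_of_work(block_data, difficulty=4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("previous_hash|transactions")
print(nonce, digest)   # verification re-hashes once; the search above is the "work"
```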
With the rapid development of multimedia technology and the constant expansion of film and television libraries, users' demand for movies and television is increasing. Accurately and promptly finding favourite movies from massive movie and television resources according to a user's preferences and needs has become a great challenge. In recent years, movie and TV recommendation has attracted a lot of research interest from academia and industry. Existing recommendation algorithms mainly include content-based and collaborative filtering methods: the latter recommends items by collaboratively learning from other users' interests, while the former examines the rich contextual information of the items themselves. In this paper, to further improve recommendation performance, a content-based collaborative filtering method is proposed to provide recommendations for movies and television. Specifically, we extract and vectorize feature and category information from movies using TF-IDF and apply truncated SVD to reduce the dimensions of both the rating matrix and the TF-IDF matrix, retaining the most representative information. We calculate the cosine similarity between the vectors from these two matrices, and the final recommendation lists 10 movies based on the average of the content and rating similarities. Extensive experiments on Amazon review data have proven the effectiveness of this method.
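A minimal sketch of the content side of this pipeline, with assumed toy metadata in place of the Amazon data, could use scikit-learn as follows: TF-IDF vectorization, TruncatedSVD for dimensionality reduction, and cosine similarity between the reduced vectors.

```python
# TF-IDF -> TruncatedSVD -> cosine similarity between movie content vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

movies = [
    "action sci-fi space adventure",        # toy stand-ins for real movie metadata
    "romantic comedy set in new york",
    "space opera with alien invasion",
]

tfidf = TfidfVectorizer().fit_transform(movies)        # (n_movies, n_terms)
reduced = TruncatedSVD(n_components=2).fit_transform(tfidf)
content_sim = cosine_similarity(reduced)               # (n_movies, n_movies)

# The final top-10 list would average content_sim with the rating-based
# similarity matrix and take the highest-scoring titles.
print(content_sim.round(2))
```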
A recommender system uses artificial intelligence and data mining technology to recommend items or services that match users' preferences based on their historical behaviour and interests. In modern society, people face more and more choices, and recommender systems help users filter valuable content out of complex information, improving user satisfaction and experience. In this paper, collaborative filtering is used to implement a recommender system based on an Amazon review data set. Singular Value Decomposition (SVD) and Principal Component Analysis (PCA) are used for dimensionality reduction and related operations on the data, and Root Mean Squared Error (RMSE) is used as the measure of recommendation accuracy. After building the recommender system and running three accuracy-analysis experiments, the system produces recommendations by applying the selected filtering algorithm to the supplied input, which frequently takes the form of user reviews of products.
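A minimal sketch of the SVD-plus-RMSE part of this setup, on a toy rating matrix rather than the Amazon data, might look as follows.

```python
# Truncated SVD of a small user-item rating matrix, with RMSE on observed entries.
import numpy as np

R = np.array([[5, 4, 0, 1],                 # toy ratings, 0 = unobserved
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

mask = R > 0
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                        # keep the top-k latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rmse = np.sqrt(np.mean((R[mask] - R_hat[mask]) ** 2))
print(f"RMSE on observed ratings: {rmse:.3f}")
```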
This paper provides an overview of the basic principles, types, recent works, and applications of quantum digital signatures. The security of traditional digital signature schemes is compromised by the rise of quantum computing, leading to a need for post-quantum cryptography. Quantum digital signatures, which rely on the principles of quantum mechanics, offer a potential solution to this problem. This paper aims to provide a comprehensive overview of quantum digital signatures and post-quantum digital signatures. It introduces the basic principles of quantum mechanics and then explains key distribution in quantum digital signatures. The paper then provides a detailed description of both quantum and post-quantum digital signatures, including their differences and applications, and examines their widespread use in fields such as Bitcoin, smart city blockchains, and finance. Finally, the paper summarizes the key findings in the field, highlights potential future directions, and discusses the challenges that remain to be addressed. Overall, this paper aims to give readers a comprehensive understanding of quantum digital signatures and post-quantum digital signatures and their applications in various domains.
With the development of optical technology, especially the maturation of WDM/DWDM technology, the demand for optical amplification in the S-band and S+ band (1450 nm-1520 nm) is increasing day by day, and the energy level structure of Tm3+ provides transitions that meet the requirements of S-band and S+ band amplification. Although the thulium ion has a very complex energy level structure, the thulium-doped fiber amplifier (TDFA) is one of the most promising optical fiber amplifiers for the S and S+ bands. At the same time, with the continuous development of computer technology and mathematical theory, optimization algorithms have developed rapidly and been widely used in recent decades; genetic algorithms, simulated annealing, and other traditional optimization algorithms have been proven to achieve good convergence speed and optimization results. In this paper, the gain of a thulium-doped fiber amplifier is optimized by using a cat swarm optimization algorithm to find the fiber length and doping concentration that maximize the gain.
In order to address the problems of, and possible improvements to, existing colour image restoration interpolation algorithms, we conduct research based on the existing bilinear interpolation method, the Cok algorithm, and the Hibbard-Laroche algorithm. Our method is to compare different types of images processed by the three algorithms using our own comparison procedure, identifying their respective advantages and disadvantages, and to combine, to some extent, the strengths of bilinear interpolation and the Hibbard-Laroche algorithm into a new algorithm that is compared against the existing three. The results show that each of the three existing algorithms has its own advantages in different scenarios, and that the new algorithm is superior to them in terms of clarity and colour restoration accuracy in most scenarios. However, its higher computational complexity makes it slower to run.
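For reference, the bilinear interpolation baseline that the comparison starts from can be sketched as follows for an RGGB Bayer mosaic; the kernel-based implementation below is a standard formulation, not the authors' code.

```python
# Bilinear demosaicing: fill each colour plane's missing samples from its neighbours.
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """raw: (H, W) single-channel RGGB Bayer mosaic, float in [0, 1]."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1
    g_mask = np.zeros((H, W)); g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1

    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green: 4-neighbour average
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue: 2- or 4-neighbour average

    r = convolve2d(raw * r_mask, k_rb, mode="same")
    g = convolve2d(raw * g_mask, k_g,  mode="same")
    b = convolve2d(raw * b_mask, k_rb, mode="same")
    return np.dstack([r, g, b])
```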
Currently, the diagnosis of pathological myopia is mostly done through manual examination, which not only requires experienced ophthalmologists but is also time-consuming and labour-intensive. In order to improve diagnostic efficiency and accuracy, and to prevent irreversible visual impairment caused by missed diagnosis, misdiagnosis, and delayed treatment, this paper presents a fine-grained image analysis task of classifying fundus images of patients with pathological myopia and non-pathological myopia. To accurately identify subtle differences among similar fundus images, a pathological myopia recognition model based on the Vision Transformer (ViT) is proposed. The model incorporates a feature selection module based on a self-attention mechanism that can effectively select important features in the fundus images, thereby eliminating the influence of irrelevant regions on recognition. Experimental results demonstrate that this method outperforms traditional ViT models, achieving high accuracy in pathological myopia recognition.
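As context, a minimal fine-tuning sketch of a plain ViT backbone for this binary fundus task is shown below (using torchvision's vit_b_16); the paper's self-attention feature selection module would sit on top of this kind of backbone, and the batch shown is a random stand-in for real fundus images.

```python
# Fine-tune a pretrained ViT-B/16 for a two-class (pathological vs. non-pathological) task.
import torch
import torch.nn as nn
from torchvision import models

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, 2)      # replace the classification head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)              # stand-in batch of fundus images
labels = torch.tensor([0, 1, 1, 0])
loss = criterion(model(images), labels)           # one typical training step
loss.backward()
optimizer.step()
```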
Contactless human-machine interaction is becoming increasingly important due to the growing number of special environmental needs and accessibility situations. Gesture recognition has also been a hot topic in computer vision and machine learning in recent years. In this paper, a real-time computer manipulation system based on hand gesture recognition is studied and deployed. A relatively mature end-to-end object detection model, YOLOv5, is trained to achieve real-time detection and recognition of hand gestures. The recognition result is mapped to a corresponding computer operation according to a set of rules, and PyAutoGUI is then used to actually control the computer. At the end of the research, the trained YOLOv5 model exhibited excellent performance and verified the feasibility and scalability of the solution, providing useful inspiration for developing more convenient and efficient software of this kind.
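The detect-then-act loop described above can be sketched roughly as follows; the gesture class names, the gesture-to-key rules, and the custom weights file are hypothetical, and YOLOv5 is loaded through its published torch.hub interface.

```python
# Detect gestures in webcam frames with YOLOv5, then trigger keypresses via PyAutoGUI.
import cv2
import torch
import pyautogui

model = torch.hub.load("ultralytics/yolov5", "custom", path="gestures.pt")  # hypothetical weights
ACTIONS = {"palm": "space", "fist": "esc", "thumbs_up": "volumeup"}          # assumed rules

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = model(rgb).pandas().xyxy[0]       # one row per detected gesture
    for name in detections["name"]:
        if name in ACTIONS:
            pyautogui.press(ACTIONS[name])         # translate the gesture into a keypress
    cv2.imshow("gesture control", frame)
    if cv2.waitKey(1) == 27:                       # Esc quits the demo loop
        break
cap.release()
```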
As the amount of internet movie data grows rapidly, traditional movie recommendation systems face increasing challenges. They typically rely on statistical algorithms such as item-based or user-based collaborative filtering. However, these algorithms struggle to handle large-scale data and often fail to capture the complexity and contextual information of user behavior. Therefore, deep learning techniques have been widely applied to movie recommendation systems. This paper reviews movie recommendation algorithms based on traditional statistical models and introduces three main deep learning techniques: Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN). ANNs can extract user and movie features at different levels; CNNs can capture features of movie posters and movie data to recommend similar movies; RNNs can take users' historical behavior and contextual information into account to better understand their interests and demands. The application of these deep learning techniques can enhance the accuracy and user experience of movie recommendation systems. This paper also discusses the advantages and disadvantages of these models and their specific application methods in movie recommendation systems, and points out directions for further development and improvement of deep learning models in this field.
The application of artificial intelligence has increasingly penetrated the field of game development. Among its uses, non-player characters (NPCs) are one notable application of AI in games. Virtual characters in a game that are not controlled by the player are collectively called NPCs; they enhance the fidelity and complexity of the game through their interactions with players. Using AI can make NPCs more vivid, thereby increasing the playability of the game and creating more possibilities. This article reviews the research and application of AI-based game NPCs in recent years.
The use of feature detectors for image stitching has become a popular research area in computer vision. Various feature extraction algorithms can be used in the image stitching process, but they perform differently on different images, and no single algorithm outperforms all others. This paper focuses on the comparison of feature extraction algorithms used for panoramic image stitching. The research utilizes SIFT, ORB, AKAZE, and BRISK to detect and match feature points on a group of image sets. The RANSAC algorithm is then used to filter out the outliers and calculate the homography matrix, and the panorama is completed by splicing and smoothing the images through the matrix transformation. From the comparison of the matching and stitching results, the AKAZE detector is found to be the fastest feature point detection and extraction algorithm, while the SIFT detector provides more feature points, making more accurate matches possible. These findings have implications for the development of efficient and effective computer vision technologies for various applications.
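The comparison protocol can be sketched as follows: each OpenCV detector is run on the same image and its keypoint count and runtime are recorded, the two quantities behind the AKAZE-versus-SIFT trade-off noted above; the test image path is hypothetical.

```python
# Benchmark keypoint count and runtime for several OpenCV feature detectors.
import time
import cv2

detectors = {
    "SIFT":  cv2.SIFT_create(),
    "ORB":   cv2.ORB_create(),
    "AKAZE": cv2.AKAZE_create(),
    "BRISK": cv2.BRISK_create(),
}

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
for name, det in detectors.items():
    t0 = time.perf_counter()
    keypoints, descriptors = det.detectAndCompute(img, None)
    dt = time.perf_counter() - t0
    print(f"{name:5s}: {len(keypoints):5d} keypoints in {dt * 1000:.1f} ms")
```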
For a long time, games were a relatively unrecognized area in the academic community and lacked detailed and sufficient discussion. But with the growth of the game industry, game AI has become a heated topic in recent years. As an important and evolving application of AI, there is a need to better discuss the application and future improvement of game AI technologies. This paper introduces the history of AI breakthroughs in games, discusses the current implementation of some popular approaches to game AI, and considers the possible future of these technologies. Newer directions such as procedural content generation are then covered to further discuss future uses of AI in games. In all, the hot spots and development prospects of this research topic are surveyed to inform the future development of game AI.