Proceedings of the 5th International Conference on Computing and Data Science
Roman Bauer, University of Surrey
Marwan Omar, Illinois Institute of Technology
Alan Wang, University of Auckland
Brain medical imaging is a primary diagnostic method for Alzheimer’s disease (AD), but it relies on physicians’ manual analysis, which is subjective and time consuming. In recent years, artificial intelligence (AI) technology has been widely applied in clinical diagnosis. This paper designs a deep learning model to realize computer-aided diagnosis from medical images. A densely connected network (DenseNet) automatically learns the semantic features relevant to AD diagnosis from brain MRI images in the ADNI dataset. At the same time, to address the problem of limited medical image samples, transfer learning was applied in the experiments. The final model achieves 90.8% accuracy, 82.2% sensitivity, and 96.1% specificity on the AD diagnosis task, surpassing the diagnostic accuracy of prevailing methods. In addition, 80.4% accuracy, 52.2% sensitivity, and 84.8% specificity are achieved on the task of distinguishing progressive from stable MCI patients. This method is expected to provide more accurate diagnosis of Alzheimer’s disease for early clinical auxiliary diagnosis.
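The accuracy, sensitivity, and specificity figures reported above are standard confusion-matrix metrics. As a minimal illustrative sketch (function and variable names are ours, not from the paper's code), they can be computed from binary labels as follows:

```python
# Diagnostic metrics for a binary classifier, where 1 = patient and
# 0 = healthy control. Illustrative sketch, not the paper's code.

def diagnostic_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)  # true-positive rate: patients caught
    specificity = tn / (tn + fp)  # true-negative rate: controls cleared
    return accuracy, sensitivity, specificity
```

The gap between the paper's sensitivity (82.2%) and specificity (96.1%) means the model misses more patients than it mislabels controls, which these definitions make explicit.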
The widespread circulation of spam has brought considerable inconvenience and trouble to people’s work and lives. It is therefore of great practical significance to continually update the methods of spam classification and filtering to improve the current state of email use. In this paper, linear regression and logistic regression are examined to test whether a given email is spam or a normal email. The logistic regression model is fitted on a public dataset, estimating the probability of spam from counts of entries across the set. The linear regression model is then fitted on the outputs of the logistic regression model to give a line representing the probability of spam over a given range of emails. Finally, the results of these two models clearly indicate the rampant and widespread nature of spam, which can enhance the public’s overall awareness of the need to examine unknown emails carefully.
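To make the logistic-regression approach concrete, here is a minimal sketch trained by per-sample gradient descent. The two features (counts of the words "free" and "$") and the toy data are invented for illustration; the paper's public dataset and exact model are not reproduced here.

```python
import math

# Minimal logistic-regression spam scorer (illustrative sketch).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Feature vectors: [count of "free", count of "$"]; label 1 = spam.
X = [[3, 2], [4, 1], [2, 3], [0, 0], [1, 0], [0, 1]]
y = [1, 1, 1, 0, 0, 0]
w, b = train(X, y)
spam_prob = sigmoid(sum(wj * xj for wj, xj in zip(w, [5, 2])) + b)
```

An email with many spam-indicative tokens (here `[5, 2]`) should score well above the 0.5 decision threshold, while a feature-free email scores below it.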
Artificial intelligence technology has undeniably developed rapidly in recent years. In medicine, and especially in the field of brain tumor detection, it has attracted great attention. By combining artificial intelligence with physiological imaging, the classification of brain tumors and the selection of the best treatment options can become more accurate and precise. AI-based brain tumor detection can reduce the rate of misdiagnosis and improve the speed of diagnosis. This article surveys AI methods for brain tumor detection. The general pipeline can be divided into four phases: data collection, preprocessing, feature extraction, and classification. The article also analyzes the application of AI in brain tumor detection and treatment, including the advantages and challenges of AI applications, directions for development, and conclusions. As a review, this paper provides a relatively complete overview of the field. Even though the growing presence of AI technology in the brain tumor medical field is already bringing greater assistance, there is still much room for improvement; in particular, a highly accurate, explainable system is needed in the future. As AI methods develop rapidly, the corresponding high-performance hardware also becomes increasingly essential.
In recent years, artificial intelligence (AI) has witnessed significant advancements in the field of education, with its ability to personalize and adapt content to individual student needs. In parallel, virtual reality (VR) has emerged as a powerful educational tool, providing immersive and interactive experiential learning experiences that improve students' motivation and engagement. Previous researchers have demonstrated the potential of machine learning (ML) algorithms, particularly reinforcement learning (RL), for generating educational content and VR environments. To create high-quality content, researchers have started exploring the integration of ML and RL algorithms into Procedural Content Generation (PCG) methods for automatically generating both textual and non-textual content such as practice questions, quizzes, and VR learning environments, which have the potential to increase the efficiency and effectiveness of educational interventions. Nonetheless, the development of these techniques requires addressing several challenges: significant advances are still needed to refine these algorithms so that they produce high-quality, effective educational content for VR applications. This article provides a comprehensive overview of the current state of research in reinforcement-learning-based content generation for VR educational applications. For each area, it discusses the state-of-the-art techniques, applications, limitations, and challenges faced in development, covering the use of natural language processing, reinforcement learning, and machine learning algorithms. The review concludes by highlighting key opportunities for future research in this field, including the development of more sophisticated models and the exploration of new applications of machine learning in educational technology.
This paper aims to offer a comprehensive review of the current state of the art in artificial intelligence (AI) as applied to license plate recognition. With the rapidly evolving nature of AI technology, deep learning approaches have gained popularity in license plate recognition, as exemplified by the success of AlphaGo. The diversity of AI in license plate recognition is notable, with numerous studies proposing systems that have achieved high accuracy in segmentation and recognition. The process of reading license plates is complex and involves several stages, including image capture, pre-processing, license plate identification, character segmentation, and recognition. Law enforcement widely uses automatic license plate recognition (ALPR) technology for detecting and preventing criminal activities, tracking stolen vehicles, and identifying suspects. Additionally, ALPR technology can monitor travel time on significant roadways, which can provide the Department of Transportation with useful data for efficient traffic management. Overall, this paper highlights the importance of AI in license plate recognition and its potential to revolutionize the field.
This paper provides an overview of artificial intelligence (AI) and speech recognition technology, including its history, applications, challenges, and future prospects. AI-powered speech recognition technology has improved significantly over the years and is used in applications such as virtual assistants, voice-activated devices, and dictation software. The technology leverages machine learning algorithms trained on vast amounts of speech data to recognize and interpret human speech with accuracy levels comparable to those of humans. However, the technology still faces obstacles, such as speech variability and background noise, which make it challenging to develop algorithms that accurately recognize all types of speech. The article provides a comprehensive review of the technical aspects of automatic speech recognition (ASR), including the process involved, the algorithms used, and the challenges and opportunities for future research in this area. The paper also discusses the architecture of ASR systems and their main components: the acoustic model, the language model, and the decoder. It further discusses the challenges that ASR systems face, such as speaker variability, noise, and limited vocabulary. Overall, this paper provides a detailed introduction to AI and speech recognition technology and its potential for various industries.
The field of artificial intelligence and deep learning has made significant progress in recent years, with the development of innovative techniques and technologies that have the potential to improve various aspects of human life. However, the elderly population faces significant challenges when it comes to utilizing electronic devices, as the demand for convenient and easy-to-use products in their daily lives remains unmet. To address this challenge, this paper proposes the design of a software solution, consisting of four key components. Firstly, the software application is simulated, and the process of software operation when the elderly use WeChat is modeled. Secondly, automatic speech recognition technology is utilized to enhance the usability of WeChat for the elderly. Thirdly, a dataset is created for data collection, processing, and analysis, with the aim of constantly improving the software. Finally, evaluation methods such as formative and summative evaluations are utilized to assess and enhance the effectiveness of the software. The proposed software solution has the potential to significantly improve the quality of life of the elderly population by enabling them to better access and utilize electronic devices. Moreover, the incorporation of automatic speech recognition technology and data analysis for continuous improvement has the potential to contribute to the advancement of the field of artificial intelligence and deep learning. Further research should focus on refining the software solution to better cater to the specific needs of the elderly population.
Video games opened a new world of gaming for people to explore, and browser games followed in the late 1990s. In order to discuss the future of browser games in depth, this article summarizes their features and current situation. The article analyzes the strengths of browser games in three aspects: the convenient setup that lets players start a game quickly, the social interaction that lets players engage with other people, and the free-to-play experience provided to players. The article also analyzes the weaknesses of browser games, the potential factors that led them to their current situation: their low display quality and their dependence on the internet. It further analyzes the reasons the revenue of browser games declined, which are related to the rise of mobile games, the reduction in funding, and the discontinuation of Adobe Flash Player. Finally, the article concludes with a discussion of the future development of browser games.
Combinatorial game theory is not only a new branch of modern mathematics but also an important subject of operations research. A single basic combinatorial game can give rise to many different rule variants, each with its own solution strategy, and both the rules and the strategies follow the fundamental theory to answer the new version of the game. The Nim game, one of the most basic combinatorial games, has been studied for many years by a large number of researchers. This paper presents and extends the strategies of two combinatorial games, Tic-Tac-Toe and Green Hackenbush, based on their regular versions and grounded in combinatorial game theory.
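The classical Nim strategy mentioned above is concrete enough to state in a few lines: under normal play, a position is losing for the player to move exactly when the XOR ("nim-sum") of the heap sizes is 0, and otherwise there is a move that restores nim-sum 0. A minimal sketch:

```python
# Winning strategy for normal-play Nim via the nim-sum (XOR of heaps).
from functools import reduce
from operator import xor

def winning_move(heaps):
    """Return (heap_index, new_size) moving to a nim-sum-0 position,
    or None if the current position is already losing for the mover."""
    s = reduce(xor, heaps, 0)
    if s == 0:
        return None  # every move hands the opponent a winning position
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:       # this heap contains the high bit of s,
            return i, target  # so reducing it to h ^ s zeroes the nim-sum
```

For example, from heaps (3, 4, 5) the nim-sum is 2, and reducing the first heap to 1 leaves 1 ^ 4 ^ 5 = 0, a losing position for the opponent.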
Optical character recognition (OCR) combines optical technology and computer technology to identify text in an image and then recognize its content, providing individuals with a great deal of convenience in their daily lives. Document text recognition, natural scene text recognition, bill text recognition, and ID card recognition are already used in daily life, but many factors still lead to inaccurate identification and detection. Therefore, different texts, patterns, or characters are suited to different types of OCR. In this paper, OCR operation methods are examined, and their similarities and differences are identified by researching the technical routes of four different types of OCR. In addition, by comparing the OCR of several commonly used languages, the advantages and disadvantages of each method are analysed.
Air pollution is worsening as society develops: environmental pollution has gradually intensified, and smog occurs frequently in more and more cities. On foggy days, the saturation and contrast of an image can be low, and colors tend to drift and distort. As a result, seeking a simple and effective image de-fogging technique is important for subsequent research. In this study, three classical de-fogging algorithms are reproduced: histogram equalization, the dark channel prior, and a convolutional neural network. The three algorithms were compared under conditions of thin fog, thick fog, high brightness, and low brightness to analyze their advantages and disadvantages. It is concluded that there is no obvious difference among the three algorithms in de-fogging effect under thick fog and high brightness, but, relatively speaking, the de-fogged image generated by the dark channel prior looks more natural. When the fog is thin, the dark channel prior and the convolutional neural network work better. Under low brightness, histogram equalization has the better de-fogging effect.
Fine-grained visual classification is a more refined classification task that identifies specific sub-types of objects. It has been widely used in commodity sales, vehicle recognition, person recognition, and other areas, and has shown great value in many fields. For example, it requires an algorithm to distinguish different species of birds or dogs to enable more practical applications. This task is difficult because the objects have similar appearances: there is obvious intra-class variance and limited inter-class difference, and different kinds of birds can look very much alike. Deep learning techniques have been applied to image recognition, natural language processing, and many other fields, and several approaches to the fine-grained classification problem have been proposed. To demonstrate the different designs of these solutions, this paper compares and analyzes fine-grained identification methods, among which WS-DAN achieves the best results and stands out as an effective method that is expected to be more widely used in this field.
During acquisition or transmission, video images are subject to random signal interference that generates noise, which can hinder people's understanding of the image and subsequent processing work. It is therefore necessary to study video image denoising and filtering algorithms. The bilateral filter is one of the typical video image filtering algorithms. However, the traditional bilateral filter does not consider differences in content across regions of the image: it is difficult to obtain the optimal filtering effect with a fixed filtering weight applied to the entire image, which leads to problems such as blurred edges and inadequately processed details. This paper studies the influence of different filter block sizes on the bilateral filtering effect and proposes an algorithm that adaptively updates the bilateral filter weight according to each block's variance. Experimental results show that the performance of the adaptive bilateral filter with varying block sizes is expected to be better than that of traditional algorithms with fixed filter weights.
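A minimal sketch of the idea, under our own assumptions (the variance threshold, the two sigma values, and the variance-to-sigma rule below are illustrative, not the paper's parameters): compute each block's intensity variance, pick a range sigma per block (flat blocks get stronger smoothing, textured blocks weaker), then apply the classic bilateral weights per pixel.

```python
import math

# Adaptive bilateral filtering sketch: per-block variance drives the
# range sigma; all parameter choices are illustrative assumptions.

def block_variance(img, x0, y0, size):
    vals = [img[y][x]
            for y in range(y0, min(y0 + size, len(img)))
            for x in range(x0, min(x0 + size, len(img[0])))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def adaptive_sigma_r(var, low=25.0, high=100.0, threshold=50.0):
    # Hypothetical rule: flat block (low variance) -> large range sigma
    # (strong smoothing); textured block -> small sigma (preserve edges).
    return high if var < threshold else low

def bilateral_pixel(img, x, y, radius, sigma_s, sigma_r):
    acc = wsum = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(img) and 0 <= xx < len(img[0]):
                # spatial closeness x intensity similarity
                w = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                             - (img[yy][xx] - img[y][x]) ** 2
                             / (2 * sigma_r ** 2))
                acc += w * img[yy][xx]
                wsum += w
    return acc / wsum
```

On a flat region the filter reduces to a weighted average of near-identical values, while across an edge the intensity term suppresses contributions from the far side, which is the edge-preserving property the paper builds on.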
Gastrointestinal diseases are among the most common clinical diseases and often require medical imaging for diagnosis and treatment. Recently, the development of deep learning technology has advanced medical image recognition, providing new ideas and methods for the automatic recognition and analysis of medical images. VGGNet19 is a convolutional neural network model that has attracted much attention because of its simple structure, easy training, and good recognition performance. This paper therefore proposes an improved VGGNet19 model for medical image recognition of gastrointestinal diseases. Specifically, the project adds an additional fully connected layer and a Dropout layer on top of the base VGGNet19 to recognize medical images of stomach diseases. Extensive experiments on standard medical stomach images show that the proposed method improves recognition performance to a certain extent.
Floating garbage is an increasingly serious problem, yet little research has addressed its recognition. Intelligent target recognition of floating garbage using deep learning techniques is therefore essential. The YOLOv7 algorithm has a strong ability to extract target features and is significantly faster than its predecessor at the same accuracy. This paper presents a technique based on YOLOv7 for identifying floating garbage and, on that basis, develops and implements a target monitoring function for floating garbage identification. Specifically, YOLOv7 is combined with the SE attention mechanism to improve target-sensing ability, and the training process is optimized with the EIOU loss. The final model significantly outperforms the standard YOLOv7 model, with improvements of 20% and 25% in the metrics mAP_0.5:0.95 and mAP_0.5, respectively.
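The EIOU loss mentioned above extends box IoU with center-distance and width/height penalty terms; the IoU core that both the loss and the mAP metrics are built on can be sketched for axis-aligned boxes given as `(x1, y1, x2, y2)` (an illustrative sketch, not the paper's implementation):

```python
# Intersection-over-Union for two axis-aligned boxes (x1, y1, x2, y2).

def iou(a, b):
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

mAP_0.5 counts a detection as correct when its IoU with a ground-truth box exceeds 0.5, while mAP_0.5:0.95 averages over thresholds from 0.5 to 0.95, hence its stricter behavior.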
The purpose of this paper is to analyze and organize the research in the field of Blockchain Internet of Things (BIoT) using knowledge mapping technology, in order to get a more comprehensive understanding of the research status and development trend in this field. By organizing and statistically analyzing the relevant literature in the field of blockchain IoT, information on keywords, topics, and hot issues of research in this field is obtained and presented in a visual way. At the same time, this paper also provides an in-depth analysis of the research in the field of blockchain IoT, including technical applications, development trends and other aspects. The research results show that the research in the field of blockchain IoT shows a rapid development trend and has received wide attention from academia and industry sectors. In terms of technical applications, the combination of blockchain technology and IoT technology has brought new ideas and technical means for the development of BIoT field. Meanwhile, with the application of knowledge graph technology, the research in the BIoT field has gradually developed in a more refined and in-depth direction. This study has certain reference value and practical significance for the research in the field of blockchain IoT, and can provide more scientific and accurate guidance for the development and application of this field.
A growing number of people worldwide suffer from chronic kidney disease (CKD), and many individuals in developing countries lack the resources needed for treatment. Medical records often contain valuable information that can be used to predict the development of CKD, and machine learning algorithms have proven particularly effective. In this study, the author analyzes a 2015 dataset of 250 participants with CKD and 150 without, using various machine learning classifiers to determine the most significant characteristics and predict CKD development. The analysis reveals that serum creatinine, specific gravity, red blood cell count, and potassium are the four most relevant risk factors for CKD prediction. Based on these four factors, the author builds machine-learning models that can accurately predict CKD development from medical records. The results show that a combination of all the features in the original dataset achieves a similar level of accuracy to the four-feature models. This research has significant implications for clinical practice, providing doctors with a new tool to predict CKD in patients. By focusing on the most relevant features, namely serum creatinine, red blood cell count, specific gravity, and potassium, physicians can make more informed decisions when treating patients with CKD.
This article sets up an electronic referee system based on monocular vision recognition; the system requires only two cameras to operate. By establishing a multi-camera real-time video monitoring system combined with artificial intelligence automatic recognition and judgment, the system overcomes many blind spots of human-eye judgment, completes rulings efficiently and accurately, improves the accuracy of the referee's rulings on basketball interference-ball violations, and enhances the fairness and spectacle of the game.
Cancer has become the number one killer of human life and health. A model that can predict cancer can therefore help doctors diagnose whether a patient has cancer, boosting diagnostic accuracy and efficiency and reducing the chance of misdiagnosis. This paper focuses on breast cancer prediction and adopts three machine learning methods, logistic regression, K-Nearest Neighbor, and decision tree models, to build automatic solutions and investigate which model is best suited to such a prediction problem. The study presents the detailed features and the data collection and pre-processing approaches to better understand such medical data. Extensive experiments then show that the accuracy scores of the three models are 97.08%, 94.89%, and 93.43%, respectively. Through comparison, it is concluded that the logistic regression model achieves the best performance on the breast cancer prediction task.
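Of the three classifiers compared, K-Nearest Neighbor is the simplest to sketch: classify a sample by majority vote among the k closest training samples. The toy two-feature samples below are invented for illustration; the paper's breast-cancer data has many more features.

```python
import math
from collections import Counter

# Minimal K-Nearest-Neighbor classifier (illustrative sketch).

def knn_predict(train_X, train_y, x, k=3):
    # Euclidean distance to every training sample, sorted nearest-first,
    # then a majority vote over the k nearest labels.
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(yi for _, yi in dists[:k])
    return votes.most_common(1)[0][0]
```

With two well-separated clusters, a query near either cluster is assigned that cluster's label, mirroring how KNN separates benign from malignant feature profiles.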
In recent years, there has been a continuing search for reliable instruments that can predict trends in financial markets and investment-related activities. In the past, academics used traditional methods to forecast the investment worth of equities by analyzing metrics such as companies' financial records from both fundamental and technical points of view. The effectiveness of these strategies may decrease as market information asymmetry continues to rise and high-frequency trading becomes increasingly prevalent. Advances in artificial intelligence technology have led researchers to develop novel methodologies, one of which is the application of neural networks for forecasting. Meanwhile, data visualization is becoming increasingly common, which can make it easier to conduct an in-depth analysis of the advantages and disadvantages of various models. The purpose of this research is to evaluate the performance of machine learning and deep learning strategies, including logistic regression, support vector machines, multi-layer perceptrons, and convolutional neural networks, in forecasting stock market prices, with various data visualization techniques utilized for investigation. The findings from error analysis demonstrate that convolutional neural networks perform the best.