Proceedings of the 2023 International Conference on Software Engineering and Machine Learning
Anil Fernando, University of Strathclyde
Marwan Omar, Illinois Institute of Technology
The emergence of artificial intelligence (AI) has once again propelled the development of robotics, making robots more efficient, autonomous, and intelligent. The two technologies promote each other and develop together in many fields, especially in medicine and healthcare. This paper describes the strengths and limitations of surgical robots and intelligent medical robots, as well as their likely future development, from both hardware and software perspectives. On the one hand, surgical robots can improve the safety and reliability of surgery, and intelligent medical robots can support drug research and development, intelligent diagnosis and treatment, and intelligent image recognition, helping doctors diagnose and treat more accurately. On the other hand, it remains difficult to mount electronic sensors on the robotic arm, current positioning technology is imperfect, and the arm's rotation angle is limited; these constraints increase the difficulty of using surgical robots. Nevertheless, with continued progress in robotics and AI, many of these limitations can be overcome, and more specialized AI techniques can be developed and applied in specific fields.
Machine learning offers several different algorithms for training models, and to improve the accuracy of EEG-based emotion recognition systems we can compare these algorithms and methods. To illustrate more intuitively the impact of different approaches, this paper reviews and compares three emotion recognition models: convolutional neural networks, a sparse auto-encoder combined with a deep neural network, and a ZTW-based epoch selection algorithm. The processes and final results of these models are examined to identify ways of improving the accuracy of emotion recognition models built on EEG datasets.
Brain tumors are a serious disease for human beings. MRI is the most widely used imaging method because of its harmlessness: patients do not need to be exposed to radiation. Segmentation of MRI images is a vital step in tumor detection. To improve the efficiency and accuracy of segmentation, scientists apply different algorithms in this process. This paper focuses on three particular algorithms: the connected component labeling algorithm (CCLA), the watershed algorithm (WSA), and the fuzzy C-means clustering algorithm (FCCA). The principles and application procedures of these three algorithms are introduced. Based on this background, the algorithms are compared from three aspects. The fuzzy C-means clustering algorithm is found to be the most efficient and accurate of the three. All three algorithms have good research prospects, and segmentation results can be further improved through algorithmic refinement.
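As a minimal illustrative sketch (not taken from the paper above), the core update loop of fuzzy C-means can be shown on a toy 1-D "scan line" of pixel intensities; the deterministic quantile initialization and the toy data are assumptions for the example.

```python
def fuzzy_c_means(pixels, c=2, m=2.0, iters=50):
    """Cluster 1-D pixel intensities with fuzzy C-means.

    u[k][i] is the degree to which pixel k belongs to cluster i;
    each center is the membership-weighted mean of the pixels.
    """
    srt = sorted(pixels)
    # Deterministic, spread-out initialization along the intensity range.
    centers = [srt[(2 * i + 1) * len(srt) // (2 * c)] for i in range(c)]
    u = [[0.0] * c for _ in pixels]
    for _ in range(iters):
        # Membership update: inverse relative distance to each center.
        for k, x in enumerate(pixels):
            d = [abs(x - ci) + 1e-9 for ci in centers]
            for i in range(c):
                u[k][i] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
        # Center update: membership-weighted mean.
        for i in range(c):
            w = [u[k][i] ** m for k in range(len(pixels))]
            centers[i] = sum(wk * x for wk, x in zip(w, pixels)) / sum(w)
    return centers, u

# Toy "scan line": dark background near 10, bright lesion near 200.
pixels = [8, 10, 12, 9, 11, 198, 200, 202, 199]
centers, u = fuzzy_c_means(pixels, c=2)
```

The soft memberships are what distinguish FCCA from hard clustering: a boundary pixel can belong partly to both the background and the lesion cluster.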
In real life, there is far more unprocessed data than labeled data, leaving a large amount of data that cannot be used directly for machine learning training. Based on a tweet dataset processed with natural language processing (NLP), this paper trains and compares a variety of machine learning models, and their different performances are analyzed and discussed. Since labeled datasets are difficult to obtain, supervised learning is limited in practice; unlabeled data, however, is abundant and can provide a continuous training set. This paper conducts a comparative experiment on semi-supervised learning and obtains better results than with either supervised or unsupervised learning alone, demonstrating that semi-supervised learning can effectively exploit unlabeled data to train machine learning models.
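To make the semi-supervised idea concrete, here is a minimal self-training sketch (an assumption of this summary, not the paper's actual method): a trivial nearest-centroid classifier on 1-D features repeatedly pseudo-labels its most confident unlabeled point and retrains. The class names and toy feature values are hypothetical.

```python
def centroid_fit(X, y):
    """Mean feature value per class: a minimal 1-D classifier."""
    return {c: sum(x for x, t in zip(X, y) if t == c) / y.count(c)
            for c in sorted(set(y))}

def centroid_predict(model, x):
    """Predict the class whose centroid is nearest to x."""
    return min(model, key=lambda c: abs(x - model[c]))

def self_train(X_lab, y_lab, X_unlab):
    """Self-training: pseudo-label the most confident unlabeled point
    (closest to any centroid), add it to the training set, repeat."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    while pool:
        model = centroid_fit(X, y)
        best = min(pool, key=lambda x: min(abs(x - m)
                                           for m in model.values()))
        pool.remove(best)
        X.append(best)
        y.append(centroid_predict(model, best))
    return centroid_fit(X, y)

# Two labeled points, four unlabeled ones (hypothetical sentiment scores).
model = self_train([0.1, 0.9], ["neg", "pos"], [0.2, 0.15, 0.85, 0.8])
```

After training, the centroids reflect both the labeled and the pseudo-labeled data, which is exactly how unlabeled examples sharpen the decision boundary.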
Recently, machine learning (ML) has become a hot topic, and most brain-computer interface (BCI) systems contain ML-based classifiers. These classifiers are designed to process the selected features and send the most probable signal the user generated to the receiving device. However, this is not easy to realize without advanced ML methods. This survey focuses on effective ML approaches to feature classification, including a state-of-the-art invention, the MDM classifier, and proposes some promising directions for further research on this topic.
Medical images are widely used today by medical practitioners for diagnosis. In general, MRI works well on soft tissues and CT on hard tissues. Due to device and hardware limitations, mathematical approximations, and transmission mechanisms in computers, noise can be introduced into medical images. In this paper, a critical review of CT image denoising in the wavelet domain is presented. In the transform domain, denoising begins by decomposing the image into a scale-space representation; the approach then involves threshold selection methods, shrinkage rules, and wavelet-based noise removal, among other techniques.
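A minimal sketch of the transform-domain pipeline just described (an illustration, not the paper's implementation): a one-level Haar transform on a 1-D signal, soft thresholding of the detail coefficients, and reconstruction. The threshold value and toy signal are assumptions.

```python
def haar_dwt(x):
    """One-level Haar transform: (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Invert the one-level Haar transform exactly."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; those below t are removed."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(signal, t=0.5):
    """Denoise by thresholding detail coefficients, then reconstructing."""
    a, d = haar_dwt(signal)
    return haar_idwt(a, soft_threshold(d, t))

# Small fluctuations around 10 are treated as noise and suppressed.
smoothed = denoise([10.2, 9.8, 10.1, 9.9], t=0.5)
```

The same decompose-threshold-reconstruct pattern generalizes to 2-D CT slices with multi-level wavelet transforms.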
This paper reviews the various publishing methods available. Owing to emerging publishing technologies such as electronic paper (e-paper), newspaper publishers must now produce, print, and disseminate their information in new ways. In addition to creating technical challenges for publishers, e-paper will change how people consume news. We address the impacts of this new channel from the outside in, taking prospective new consumption as the starting point. Based on three empirically validated future consumption scenarios, we examine the impacts that the e-paper publication channel will have on distribution, on media house systems including editorial, advertising, and subscription, and on associated procedures.
This paper introduces a structural health monitoring system for rail tracks. The slab track system is in high demand for its safety and sustainability in high-speed railway infrastructure, particularly in India for the bullet-train project. Previously, monitoring the health of slab tracks was costly and performed irregularly, but advances in digitalization and wireless sensor networks are transforming infrastructure health monitoring. In rail-track systems, wireless sensors can provide information on, detection of, and prediction of track infrastructure health. Such safety-critical railway tracks require an efficient communication system, and this paper proposes an accurate and efficient design for that communication.
With the large-scale adoption of electric vehicles (EVs) around the world, integrating EVs into the power grid to achieve peak shaving and valley filling has become a popular research direction. Taking vehicle-to-grid (V2G) technology as its core, this paper first analyzes V2G grid-connection control at the theoretical level, namely active and reactive power control and clustering control. From the perspective of cooperative game theory, it elaborates the commercial operation mode of grid-connected V2G. Finally, it puts forward assumptions about the future development of V2G based on the current status of the technology.
This research proposes a user-centered combinatorial data anonymization method. A data matrix is said to be k-anonymous if each row occurs at least k times. Building on prior work and addressing its shortcomings, the authors propose PATTERN-GUIDED k-ANONYMITY, an improved k-anonymization problem that allows users to designate the combinations in which suppressions may occur; users of anonymized data can thus indicate that different aspects of the data are valued differently. K-anonymity is usually realized through generalization and suppression techniques. Generalization abstracts data so that specific values cannot be distinguished; for example, exact ages can be generalized into age groups.
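The generalization example above can be sketched in a few lines (a minimal illustration, not the paper's algorithm; the sample table is hypothetical): exact ages are coarsened into decade bands, after which the k-anonymity condition holds.

```python
from collections import Counter

def generalize_age(age, width=10):
    """Replace an exact age with its decade band, e.g. 37 -> '30-39'."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(rows, k):
    """True if every quasi-identifier row occurs at least k times."""
    counts = Counter(tuple(r) for r in rows)
    return all(c >= k for c in counts.values())

# Hypothetical quasi-identifiers: (sex, age).
raw = [("F", 34), ("F", 37), ("M", 52), ("M", 58)]
anon = [(sex, generalize_age(age)) for sex, age in raw]
# raw is not 2-anonymous (every row is unique); anon is.
```

Suppression would instead replace individual cells with a wildcard; pattern-guided k-anonymity constrains which combinations of cells may be suppressed.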
With the rapid development of high technology in China, the field of artificial intelligence has also advanced. Image recognition is an important topic in AI, mainly comprising two modules: classification recognition and feature extraction. At the same time, deep learning, an important research direction of AI, has made rapid progress in recent years; it has been widely and successfully applied in image recognition, speech recognition, and other fields. This paper analyzes the application of deep learning in image recognition, mainly in face recognition and remote sensing image classification. Its purpose is to assist relevant practitioners and thereby promote the development of image recognition amid the broader rise of AI.
Information security is one of the foremost challenges of the present day, and whenever the topic is information security, the concept of cryptography comes into the picture. Every day, people and organizations use cryptography to keep their communications and data confidential and to preserve their privacy; it is one of the most successful methods businesses use to protect their storage systems, whether at rest or in transit. Yet although cryptography is an effective technique for securing data, modern technology can break cryptographic schemes. Some data encryption algorithms, however, are many times stronger than today's conventional cryptography and can be constructed using quantum computing: quantum cryptographic algorithms. Quantum cryptography uses the rules of quantum physics, rather than the mathematics underlying classical encryption, to protect and transmit data in a way that cannot be intercepted. Quantum key distribution is the best-known illustration of quantum cryptography and offers a safe solution to the key exchange problem. The proposed work deals with quantum cryptography and focuses on how quantum cryptographic algorithms are more secure than traditional cryptography.
As times change, investors have a growing demand for stock price forecasting. However, stock price fluctuations are full of uncertainty, causing traditional machine learning algorithms to err more in long-term forecasting. Based on the LSTM model, this paper uses Tushare to obtain historical stock prices, and the optimal structure and best training parameters of the LSTM model for stock price prediction are determined experimentally. Prediction accuracy was evaluated by MAE; the best result was 69.15. Compared with the traditional SVR and ARMA models, the LSTM predictions track the actual values more closely and achieve higher accuracy.
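Two pieces of this pipeline are easy to sketch without the model itself (an illustration under stated assumptions, not the paper's code): turning a price series into the sliding-window (input, next-price) pairs an LSTM is trained on, and the MAE metric used for evaluation. The lookback length and toy prices are assumptions.

```python
def make_windows(prices, lookback):
    """Turn a price series into (window, next-price) supervised pairs,
    the form an LSTM-style sequence model is trained on."""
    X = [prices[i:i + lookback] for i in range(len(prices) - lookback)]
    y = [prices[i + lookback] for i in range(len(prices) - lookback)]
    return X, y

def mae(y_true, y_pred):
    """Mean absolute error: the metric used to score predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Each window of 3 past prices predicts the next price.
X, y = make_windows([10, 11, 12, 13, 14], lookback=3)
```

The choice of lookback length is one of the "training parameters" that such a study tunes experimentally.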
Yoga is an age-old discipline that calls for physical postures, mental focus, and deep breathing; its practice can enhance stamina, power, serenity, flexibility, and well-being, and it is currently a popular form of exercise worldwide. Good posture is the foundation of yoga: although yoga offers many health advantages, poor posture can lead to issues such as muscle sprains and pains. In recent years, people have become more interested in working online than in person, and our approach benefits those who are accustomed to internet life and find it difficult to visit yoga studios. In our system, an image captured by a web camera is used as input and the model categorizes the yoga pose; the MediaPipe library first skeletonizes the image. A variety of deep learning models are then used to refine the recognized posture and improve the asana. On non-skeleton photos, VGG16, InceptionV3, NASNetMobile, YogaConvo2d, and InceptionResNetV2 rank in that order by validation accuracy. In contrast, on skeletal images the proposed YogaConvo2d model reports the highest validation accuracy, followed by VGG16, NASNetMobile, InceptionV3, and InceptionResNetV2.
Deep learning has recently overturned algorithm design in one field after another, such as speech recognition, image classification, and text processing, by forming a model that starts from training data, passes through an end-to-end model, and directly outputs the final result. This not only simplifies the pipeline; because each layer in a deep network can tune itself for the final task, with the layers ultimately cooperating, it can also greatly improve task accuracy. This paper primarily introduces what deep learning is, its underlying principles, and its applications in people's lives. It analyzes and clarifies the future development direction of deep learning and its practical application in many different fields, so as to offer references for future studies.
An exoplanet is a planet that orbits a star outside our solar system, and the study of exoplanets is an active area of research in astronomy. In this research, we aim to use the Kepler dataset provided by the NASA Exoplanet Archive to identify and classify exoplanets that could potentially support life. The Kepler dataset, which comprises observations of over 150,000 stars, has been instrumental in the discovery of thousands of exoplanets. We analyse the dataset using machine learning techniques to classify exoplanets as potentially habitable based on their orbital period, size, distance from their host star, and other parameters; emerging machine learning algorithms aid in detecting the habitability of exoplanets in different stellar systems. For finding an exoplanet we use the transit method, based on the principle that when an exoplanet passes in front of its host star, it causes a temporary dip in the star's brightness; by monitoring a star's brightness over time, scientists can detect these periodic dips and infer the presence of an exoplanet. The findings of this research have the potential to significantly advance our understanding of the prevalence of life in the universe, and machine learning on the Kepler dataset will be an essential tool in the quest for potentially habitable exoplanets.
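The transit principle can be sketched on synthetic data (a toy illustration, not the paper's pipeline; the flux values, dip depth, and threshold are assumptions): dips below the median brightness mark candidate transits, and the spacing between dip groups estimates the orbital period.

```python
def find_transits(flux, depth=0.01):
    """Indices where normalized flux dips below baseline by `depth`."""
    baseline = sorted(flux)[len(flux) // 2]  # median brightness
    return [i for i, f in enumerate(flux) if baseline - f > depth]

def estimate_period(dips):
    """Average spacing between the starts of consecutive dip groups,
    a crude estimate of the orbital period in sample units."""
    starts = [dips[0]] + [b for a, b in zip(dips, dips[1:]) if b - a > 1]
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    return sum(gaps) / len(gaps) if gaps else None

# Synthetic light curve: a 2% dip lasting 2 samples, repeating every 10.
flux = [0.98 if i % 10 in (3, 4) else 1.0 for i in range(20)]
dips = find_transits(flux)
```

Real Kepler light curves add noise, stellar variability, and detrending steps, which is where the machine learning classifiers come in.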
At present, the application of BIM is in the ascendant, and 5G, the Internet of Things, and big data are also on the rise. AI is gradually influencing and innovating the construction field, releasing the industry's productivity through BIM, the Internet of Things, big data, and other means. Starting from the current situation of the construction industry and focusing on the integration of building construction, this paper describes how AI helps from early-stage architectural design through to intelligent construction assistance in a building's later stages. AI can liberate labor and improve efficiency from initial design to final construction, though shortcomings such as excessive cost remain. In the future, AI applications may develop further once their time benefits outweigh their costs.
As the demand for air traffic has grown rapidly in recent years, the efficiency and safety of air traffic management face greater challenges under limited airspace resources. As an important part of the civil aviation system, existing air traffic management capability can no longer meet the demands of air traffic growth. Machine learning, an advanced computer modelling method, has shown good application value and promise for air traffic management. This paper first introduces the application methods and modelling process for machine learning in air traffic management, then reviews the current state of research in three areas, namely air traffic flow management, air traffic services, and airspace management, and finally points out the challenges and the development outlook for applying machine learning to air traffic management. Overall, introducing machine learning into air traffic management represents a major trend with significant implications for the field's development.
With the development of the times, computers are used ever more widely, and research and development of the adder, the most basic arithmetic unit, shapes the development of the computer field. This paper analyzes the principles of the one-bit adder and the floating-point adder through literature analysis. The one-bit adder is the most basic type of traditional adder; others include the ripple-carry (bit-by-bit) adder and the carry-lookahead adder. The purpose of this paper is to explain the basic principles of adders, among which IEEE 754 binary floating-point arithmetic is particularly important: the traditional fixed-point adder is the basis of the floating-point adder, which opens new directions for future floating-point adder optimization. This paper finds that the floating-point adder is one of the most widely used components in today's signal processing systems, and therefore its improvement is necessary.
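The one-bit adder and the ripple-carry chain built from it can be modelled directly in software (a behavioural sketch of the standard textbook circuit, not hardware from the paper):

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin,
    carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Chain `width` one-bit adders, propagating the carry bit by bit
    (the 'bit-by-bit' adder). Returns (sum mod 2**width, carry-out)."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry
```

The carry-lookahead adder removes exactly this sequential carry chain, computing all carries in parallel from generate/propagate terms, which is why it is faster at wider word sizes.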
In general, doctors determine the presence of heart disease through clinical evaluation and pathological data, a diagnostic process that is complex and inefficient. Professionals are therefore committed to researching efficient and accurate methods for predicting heart disease. After surveying the literature, this paper finds that existing heart disease prediction systems place high demands on clinical data. Given the shortage of medical resources under the COVID-19 epidemic, this paper develops a simple heart disease prediction system that predicts heart disease from simple, easy-to-measure patient data, so that the disease can be prevented. The method consists of two steps. First, characteristics related to heart disease are collected, and the ten most important are selected through correlation analysis and literature research: gender, age range, body mass index (BMI), smoking status, physical health index, walking difficulty, stroke status, skin cancer, diabetes, and kidney disease. Second, a heart disease classification algorithm based on artificial neural networks is developed using these features, achieving prediction accuracy close to 92%. In the future, the proposed model could be leveraged for heart disease recognition.
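The correlation-based feature selection in step one can be sketched as follows (an illustration with entirely hypothetical data, not the paper's dataset): rank candidate columns by the absolute Pearson correlation between each feature and the disease label, and keep the top k.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between a feature and the label."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def top_features(table, labels, k):
    """Rank feature columns by |correlation| with the disease label."""
    scores = {name: abs(pearson(col, labels))
              for name, col in table.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical patients: 1 = has heart disease, 0 = healthy.
labels = [1, 1, 0, 0, 1, 0]
table = {
    "smoking":  [1, 1, 0, 0, 1, 0],   # tracks the label closely
    "age_band": [3, 4, 1, 2, 4, 1],   # moderately correlated
    "shoe":     [7, 9, 8, 7, 9, 8],   # irrelevant feature
}
selected = top_features(table, labels, 2)
```

The selected features would then feed the neural network classifier in step two.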