Proceedings of the 5th International Conference on Computing and Data Science
Roman Bauer, University of Surrey
Marwan Omar, Illinois Institute of Technology
Alan Wang, University of Auckland
Wireless sensing has dramatically cut the cost of structural monitoring, so it can now be deployed permanently as an integral part of smart infrastructure, providing real-time information about the structure. The wireless devices transmit the collected data about cracks, displacement, and excess vibration in slab tracks. In this paper, the train itself serves as a data mule: it collects the sensor data and uploads the information to a remote control centre. The collected data is stored in a database, and the status of the track is obtained by firing a query from an application. This paper proposes several designs for the communication system that are efficient, accurate, and, most importantly, low-cost.
The Unmanned Aerial Vehicle (UAV) environmental monitoring system for an area is multipurpose and easy to use. With the use of targeted mobile networks and drone flight control, the relevant staff can effectively monitor regional temperature and humidity. This study draws on the authors' previous expertise to outline the key points of a programme demonstration and system design, and to examine the software flow architecture of regional environmental monitoring. As a result of this work, users will be able to easily collect weather and other data using a drone that can be controlled from an Android phone and flown to faraway, inaccessible sites. The mobile NodeMCU may receive commands from the WiFi module and relay them to the drone. The drone is flown once the instructions have been sent via the IoT-based NodeMCU module. From there, it gathers data via the network and communicates that information to the cloud database over WiFi. This setup has the potential to be incorporated into a weather prediction model that can accurately forecast winds and temperatures near the surface to within 100 meters.
Tamil Nadu reportedly has the second-highest rate of student suicide in India. According to the National Crime Records Bureau (NCRB) report for 2020, about 46 people died by suicide each day in the state; more than two of these daily victims were students. The Government of India's residential schools for meritorious rural children have reportedly witnessed 49 suicides in just five years, and half of the children who killed themselves on the premises were Dalits and Adivasis. Educationalists point to the pressure mounted on students for marks and competitive examinations as one of the main reasons. Most schools have counsellors, but visiting the counsellor still carries a stigma. Residential schools, where these cases occur more often, need closer monitoring. Children dying by suicide due to academic pressure is almost unheard of in other countries. A huge number of enquiries have revealed depression and anxiety to be potential risk factors for suicide. Heart health is generally good among students between the ages of 15 and 25, yet a few physiological parameters are altered by anxiety and depression; hence, heart rate is a symptom of psychiatric disease. Clinical trials have demonstrated that mental causes, most frequently anxiety, depression, and somatoform illnesses, are present in cases of palpitation, and psychiatric problems such as anxiety and somatization are linked to palpitation. A study found that individuals with high levels of both depression and anxiety are 54.77 times more likely to die by suicide, a significant increase over individuals with high levels of anxiety alone (2.46) or depression alone (26.32). Students studying in +2 classes and preparing for high marks are naturally under depression, and this is prolonged over two years. When the examinations and results are near, the combined depression and anxiety factor raises the risk factor for suicide to 84.5.
Hence, a system with a learning and decision-making mechanism is needed to constantly monitor the students so that prompt action can be taken. A wearable IoT device continuously transmits the student's ID and heart-rate level to a fellow node. A logistic regression machine learning algorithm is used for decision making based on the probability of occurrence. The authorities are notified on their gadgets, and prompt action can be taken. This system is most applicable to boarding schools, where the children are far away from their parents and the number of suicides is high, particularly among +2 grade students. The system can also be used in various communities where the working culture induces depression and anxiety.
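The decision step described above, a logistic regression over the streamed heart-rate reading, can be sketched in a few lines. The weights and the alert threshold below are illustrative placeholders, not coefficients fitted in the paper:

```python
import math

def alert_probability(pulse_bpm, weights=(-12.0, 0.12)):
    """Logistic model for the probability that a pulse reading signals
    high anxiety/depression risk. weights = (bias, slope) are
    illustrative placeholders, not fitted values."""
    bias, slope = weights
    return 1.0 / (1.0 + math.exp(-(bias + slope * pulse_bpm)))

def should_notify(pulse_bpm, threshold=0.5):
    """Flag the authorities when the modelled risk probability
    crosses the threshold."""
    return alert_probability(pulse_bpm) >= threshold
```

In a deployment, the coefficients would be learned from labelled heart-rate data, and the flag would be pushed to the staff's devices along with the student ID.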
The purpose of licence plate recognition is to analyze pictures or videos of moving vehicles to read the plate and identify the vehicle's owner. Traffic data management and smart transportation systems rely heavily on licence plate reading technology. Initial picture capture, image preprocessing, licence plate analysis, character segmentation, and recognition are the building blocks of licence plate recognition. The present analysis centres on the examination of the above key steps. In this paper, we introduce the latest research progress in the implementation of licence plate recognition utilizing deep learning techniques, including the classic framework of licence plate location and character recognition, representative methods, and their advantages and disadvantages. We also perform a quantitative comparison of existing representative methods. Finally, we summarize the challenges in the research domain of licence plate recognition and discuss the future development direction from the aspects of neural network interpretability, more general small sample learning methods, and incremental learning.
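As a structural illustration of the pipeline described above, the sketch below chains toy stand-ins for plate localization, character segmentation, and recognition. In a real system each stage is a trained detector or classifier; here the "image" is just a string and the bracket-delimited plate is a made-up convention for this example:

```python
# Toy stand-ins: the "image" is a string, and the plate region is whatever
# sits between brackets. Real systems replace each stage with a trained model.
def locate_plate(image):
    return image[image.index("[") + 1 : image.index("]")]

def segment_characters(region):
    return list(region)

def recognize_character(ch):
    return ch.upper()

def recognize_plate(image):
    """Locate -> segment -> recognize: the pipeline shape from the abstract."""
    return "".join(recognize_character(c)
                   for c in segment_characters(locate_plate(image)))
```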
With the increasing demand for intelligent systems capable of comprehending visual information, the discipline of image object detection has experienced rapid expansion. Although numerous methods have been proposed, the existing literature lacks exhaustive analyses and summaries of these methods. This paper seeks to address this deficiency by providing a thorough overview and analysis of image object detection techniques. This paper analyzes and discusses traditional methods and deep learning-based methods, with a focus on the current state and shortcomings of traditional methods. Further discussion is given to deep network-based object detection methods, mainly through a comparative analysis of two-stage and one-stage methods. The basic performance of the You Only Look Once (YOLO) series methods is highlighted. The contribution of large-scale datasets and evaluation metrics to the advancement of the state of the art is also examined. This comprehensive analysis is a useful reference for researchers who aim to contribute to the continual progress of image object detection.
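One concrete piece of the evaluation machinery discussed above is Intersection-over-Union, which underlies detection metrics such as mAP. A minimal version for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns 0.0 when the boxes do not overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents are clamped at zero for disjoint boxes.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how per-class precision-recall curves, and hence mAP, are built.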
This paper focuses on designing a robot that can efficiently relieve pressure in and help heal children with autism. Autism, a common and unavoidable condition, occurs in 1 in 100 children worldwide and is not easy to recover from, especially for children. While many organizations have explored the effectiveness of social robots, only a few available robots encompass helpful functions. In this paper, we argue that a few new functions or changes, such as a portable design and a more attractive and complex interaction design, can significantly improve the possibility of healing. The research incorporated the detailed design of the robot and used meta-analysis as the data analysis method; our data come from previous research in different areas of the world. The analysis reveals that current robots are still incapable of comprehensive interaction and indicates what future designs are expected to include.
In traditional machine learning, supervised and semi-supervised learning are designed for a closed-world setting where the training data is fixed and does not change over time. Unfortunately, these methods still require a large number of labels for the categories to be classified, which is expensive and impractical. A novel category discovery algorithm is designed to discover new categories while classifying and recognizing labeled images. In this setting, the machine can automatically identify new categories without manual labeling of image feature categories, which can greatly reduce the cost of image classification. Kai Han et al. named this problem the novel category discovery problem and proposed that deep clustering can solve it well. This paper compares two commonly used strong baselines in novel category discovery and proposes that adding a post-processing model can further improve the accuracy of the model's results. This paper applies the relaxed contrastive learning method to Ranking Statistics, and the accuracy on CIFAR-100 is improved by 6%.
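The Ranking Statistics baseline mentioned above derives pairwise pseudo-labels by comparing the top-k ranked feature dimensions of two images. A minimal sketch of that pairwise test, with plain lists standing in for the network's feature vectors:

```python
def same_class_by_ranking(feat_a, feat_b, k=3):
    """Ranking-statistics pairwise label: two images are judged to share a
    class if the index sets of their top-k feature activations match."""
    def topk_indices(feats):
        # Indices of the k largest activations, compared as a set
        # so that ordering within the top-k does not matter.
        return set(sorted(range(len(feats)), key=lambda i: -feats[i])[:k])
    return topk_indices(feat_a) == topk_indices(feat_b)
```

These binary pseudo-labels then drive a pairwise clustering loss on the unlabeled images, which is the mechanism the post-processing model in this paper builds on.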
Customer churn has long been a concern for companies because it not only reduces a company's profit in the short term but is also extremely detrimental to its growth in the long term. This paper analyzes customer churn in banks using two machine learning methods, logistic regression and decision trees, to predict the churn rate of customers. The decision tree results are analyzed further, on the premise that decision trees are more accurate in prediction and do not show the large prediction bias toward certain groups that logistic regression does. The results show that age, estimated salary, and the number of products are important predictive factors, and that customer groups with certain specific characteristics show a higher departure rate. To address this situation, this paper recommends that bankers continuously optimize their business systems and focus on user groups with high churn rates.
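To illustrate the kind of rule a fitted churn tree produces over features like age and product count, here is a hand-written decision stump. Every threshold and split below is invented for illustration and is not a result from the paper:

```python
def churn_risk(age, n_products, est_salary):
    """Hand-written stump imitating the shape of a fitted churn tree.
    All splits and thresholds are illustrative placeholders."""
    if age >= 50:
        # Older customers holding a single product: hypothetical high-risk leaf.
        return "high" if n_products <= 1 else "medium"
    if n_products >= 3:
        # Hypothetical leaf: many products can signal account consolidation.
        return "high"
    return "low"
```

A real tree learned by CART would choose these splits automatically to maximize purity, and its readable branch structure is exactly why the paper uses it for the follow-up analysis.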
Unmanned Aerial Vehicles (UAVs) are currently gaining popularity. This paper proposes a method that applies Linear Active Disturbance Rejection Control (LADRC) to a Quadrotor Unmanned Aerial Vehicle (QUAV) controller to improve on the traditional PID controller. Firstly, the application and shortcomings of traditional PID control in UAVs are introduced, and the LADRC method is proposed. Then the linear simplification and parameter tuning of ADRC are carried out. In the Simulink environment, a QUAV dynamics simulation platform is established according to the mathematical model of the QUAV. Finally, different control algorithms are designed for different control channels, and tracking models are introduced in various attitudes to simulate and verify the control performance of LADRC. The results show that the LADRC controller is effective and can be combined effectively with traditional PID control. Its application in the QUAV enables more precise speed tracking control and stable flight. Compared with the traditional PID controller, under the experimental conditions of this paper, the LADRC controller achieves higher control accuracy and efficiency. Finally, this paper summarizes the design of LADRC and gives a brief outlook on the development of UAVs.
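The core of LADRC is a linear extended state observer (LESO) that estimates the channel's state together with a lumped "total disturbance". A minimal Euler-integrated sketch for one second-order channel, using the standard bandwidth parameterization (observer gains 3·wo, 3·wo^2, wo^3); the values of b0 and the observer bandwidth wo would be tuned per channel and are not taken from the paper:

```python
def leso_step(z, y, u, b0, omega_o, dt):
    """One Euler step of a linear extended state observer for a
    second-order plant. z = (position est., velocity est., total
    disturbance est.); y is the measured output, u the control input."""
    z1, z2, z3 = z
    e = y - z1  # observation error drives all three estimates
    z1 += dt * (z2 + 3 * omega_o * e)
    z2 += dt * (z3 + 3 * omega_o ** 2 * e + b0 * u)
    z3 += dt * (omega_o ** 3 * e)
    return (z1, z2, z3)
```

With the disturbance estimate z3 in hand, the LADRC control law cancels it from a simple PD-like feedback term, which is what lets the scheme outperform plain PID under disturbances.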
This paper uses the literature review method to systematically sort out and introduce the basic principles of three transformer-based algorithms and their application in the field of image classification; this has high theoretical and social value and provides a strong reference for the future development of the transformer model. ViT is an innovation that brings the transformer model to computer vision. It first splits an image into several local patches (16x16) and then maps each one to a feature vector. These vectors are delivered to an encoder for refinement. Finally, a special token is appended to these vectors for integrating location information, and the final prediction is based on these tokens. Swin-T is a new Transformer architecture proposed by Microsoft Research to improve the performance of computer vision tasks. It adopts a new windowed feature extraction strategy, which maintains high accuracy while significantly reducing computation and memory consumption. It has achieved leading performance in multiple computer vision tasks, becoming one of the most advanced visual Transformer models. In computer vision image classification, image information is highly redundant: the absence of one image patch may not confuse the model much, since the missing content can be inferred from the surrounding pixels. The masked autoencoder (MAE) therefore masks a high proportion of image patches to create a difficult learning task; the method is simple but extremely effective.
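The patch-splitting step that ViT starts with is easy to make concrete. A minimal pure-Python version that cuts an H x W image (nested lists) into non-overlapping patches and flattens each one, assuming the image dimensions are multiples of the patch size:

```python
def patchify(image, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch blocks, each flattened to a vector. This is the
    first step of ViT, before the linear projection to embeddings."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(0, h, patch):        # walk block rows
        for j in range(0, w, patch):    # walk block columns
            patches.append([image[r][c]
                            for r in range(i, i + patch)
                            for c in range(j, j + patch)])
    return patches
```

In ViT each flattened patch is then linearly projected to a feature vector before entering the encoder; MAE applies its high-ratio masking to exactly this sequence of patches.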
Climate change is a major global challenge, and CO2 emissions are a significant contributor to this issue. This paper uses data visualization methods to explore the relationship between CO2 emissions and GDP growth. To ensure the reliability of the research results, data is collected from authoritative institutions, including the World Bank and the Intergovernmental Panel on Climate Change (IPCC). The paper uses regression analysis and data visualization techniques to analyze the relationship between CO2 emissions and GDP growth. The results indicate a positive correlation between GDP growth and CO2 emissions, which suggests that economic growth is linked to an increase in emissions. The findings of this paper can contribute to a better understanding of the relationship between economic growth and environmental sustainability.
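The regression at the heart of such an analysis can be sketched with ordinary least squares: fit y = a + b*x and read the sign of b as the direction of the association. The function below is a generic OLS fit; nothing here uses the paper's actual World Bank or IPCC series:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.
    Returns (intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x), in unnormalized form.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

Passing GDP values as xs and emissions as ys, a positive fitted slope b is the quantitative form of the positive correlation the paper reports.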
The purpose of Artificial Intelligence (AI) is to simulate the learning process of the human brain through strong computing power and appropriate algorithms, so that machines can develop human-like judgment at work. Current AI mainly relies on deep learning models based on artificial neural networks, such as the Convolutional Neural Network (CNN) in computer vision, but these also come with defects. This paper introduces the defects of CNNs, discusses the Transformer model as a way to address the unexplainability of traditional CNN algorithms, and examines why the Transformer model and the attention mechanism are considered a path to "AI intelligibility".
Recently, artificial intelligence has had a great impact on social media. Style transfer, a subtask of deep learning, can transform images shared on social media into works with an artistic painting style through image recognition technology, creating amazing effects. However, current style transfer technology is still in the early stages of development: it can only convert a shared picture into the style of a certain artwork and does not yet have the ability to convert the emotional expression of the picture. In this paper, we propose an emotional expression framework based on social media style transfer and conduct an experiment on the LFW database, in which the same portrait photo expresses different emotions when combined with images of different styles, realizing style transfer for emotion and emotional expression. The application of style transfer technology in social media is therefore of great value and can promote the intelligent development of social media.
In recent years, the pursuit of academic degrees has intensified, leading to a surge in the number of undergraduate students applying for graduate programs at renowned universities worldwide. Consequently, universities have adopted a multifaceted approach to evaluating applicants, moving beyond traditional metrics like GPA to assess overall potential. This study aims to understand the criteria employed by universities to select graduate applicants and to assist undergraduate students in planning their academic trajectory. To achieve this, a diverse set of machine learning models is compared, including multiple linear regression, K-nearest neighbors, decision trees, support vector machines, and Bayesian classifiers. These models were trained on online admission probability data to predict the likelihood of admission and uncover the primary factors guiding university selection processes. The findings reveal that while research experience can enhance competitiveness in graduate admissions, academic indicators such as GPA, GRE scores, and language proficiency remain critical determinants of acceptance. Moreover, higher-ranked institutions exhibit a higher proportion of applicants with research experience. Candidates with strong GPAs should demonstrate competitive language proficiency and highlight their research experience through well-crafted recommendation letters and personal statements. Conversely, applicants with lower GPAs should strive for outstanding GRE scores to compensate for academic performance.
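One of the compared baselines, k-nearest neighbours, is simple enough to sketch directly: predict an applicant's admission probability as the average over the k most similar training applicants. The (GPA, GRE) feature pairing below is an assumption for illustration, not the paper's exact feature set:

```python
def knn_predict(train, query, k=3):
    """Plain k-nearest-neighbours regression.
    train: list of (feature_tuple, admission_probability) pairs.
    query: feature tuple. Returns the mean probability of the k
    training points closest to the query in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return sum(prob for _, prob in nearest) / k
```

In practice features on different scales (GPA vs. GRE) would be normalized first so that no single feature dominates the distance.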
In recent years, with the continuous advancement of science and technology, virtual reality technology has been further developed and gradually applied to various fields. In the field of games, virtual reality technology has been fully applied, and virtual reality games provide the public with a more playable virtual space. At present, research on virtual reality games is not sufficient to fully analyze and summarize their uniqueness. Therefore, this paper conducts the following research. It first analyzes the characteristics of traditional computer games, then focuses on the application of virtual reality technology in games, elaborating on three aspects: game quality, authenticity, and interactivity, and finally summarizes some shortcomings of current virtual reality games. Game quality refers to the improvement in game design: the virtual reality game as a whole provides players with more realistic visuals and different interaction methods. At the end, the paper summarizes the full text and looks forward to future research directions.
The birth of the Transformer signalled the start of a new chapter in the deep learning era. Through an encoder-decoder architecture including residual connections, multi-head self-attention, and more, it completely reformed deep models and unified the models used in traditional computer vision (CV) and natural language processing (NLP) problems. In recent years, many published papers have adapted the original Transformer model to better complete tasks in time series analysis, CV, and NLP. In natural language processing, Bidirectional Encoder Representations from Transformers (BERT) employs a two-way transformer structure to learn context-based language representations, whereas the Generative Pre-trained Transformer (GPT) employs a one-way transformer but scales up corpus training to enhance the model's effect. The Vision Transformer model is a cornerstone of computer vision: it separates the input image into patches, projects each patch into vectorized features, and then passes them to the Transformer. Building on the idea of the Vision Transformer, Swin Transformer and BiFormer further optimized the architecture and achieved better results. Time series work combines the ideas embodied in CV and NLP and, in doing so, addresses the specificity and various difficulties of time series problems to lower algorithmic complexity and increase prediction accuracy. This article summarizes the uses and improvements of the Transformer in NLP, CV, and time series, explores the development history and ideas behind algorithm optimization, and predicts the potential developments of the Transformer in these three fields.
The importance of high-resolution Synthetic Aperture Radar (SAR) imaging is undeniable in Earth monitoring applications, since it offers useful data for analysis. Post-processing methods, including edge and object detection, segmentation, and speckle noise removal, must be performed to extract information from the images. The key technique that makes images aesthetically pleasing and understandable is despeckling. The presented study assesses the latest trends and state-of-the-art techniques for SAR image despeckling to gauge the performance and agility of existing techniques through a combination of theoretical and experimental analysis. This study analyses various techniques used in the literature. The quantitative and qualitative analysis is done using the Peak Signal-to-Noise Ratio (PSNR) and Universal Image Quality Index (UIQI) indices to determine the better approach.
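Of the two indices used, PSNR is the simpler to state: it is a log-scaled ratio of the peak pixel value to the mean squared error against a reference. A minimal version over flattened pixel sequences (UIQI, which additionally accounts for luminance and contrast distortion, is omitted here):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel
    sequences; higher means the despeckled image is closer to the
    reference. Identical images give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For despeckling benchmarks the reference is a clean (or synthetically speckled-then-cleaned) image, and methods are ranked by how many dB of PSNR they recover.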
The rapid urbanization and increasing number of vehicles on the roads demand efficient and accurate vehicle re-identification (Re-ID) techniques for intelligent transportation systems, traffic monitoring, and surveillance applications. This paper offers a detailed analysis of deep learning approaches to vehicle Re-ID, covering feature learning, attention mechanism, unsupervised learning, self-supervised learning, and specialized loss function. The efficiency of these methods is assessed using VeRi-776 and VehicleID datasets. Key metrics, such as mean Average Precision (mAP) and Rank-n accuracy, are employed to gauge their success. Results show that feature learning, attention mechanism, and specialized loss function play pivotal roles in achieving high performance in vehicle Re-ID tasks, with unsupervised and self-supervised learning methods displaying potential for practical applications due to their scalability. The paper also highlights several challenges, including enhancing the interpretability of attention mechanisms, exploring the relationship between popular loss functions, and addressing the infeasibility of existing methods for real-time applications. The paper concludes with several recommendations for the prospects of vehicle Re-ID, including developing advanced real-time algorithms, enhancing deep learning techniques, investigating innovative approaches, and addressing existing challenges. These proposed advances could significantly improve the efficacy of feature learning and spark fresh innovations in this field.
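The Rank-n metric reported on VeRi-776 and VehicleID reduces to a top-n membership test over each query's ranked gallery list. A minimal sketch, with gallery identities given as plain strings:

```python
def rank_n_accuracy(rankings, truths, n=1):
    """Rank-n accuracy for re-identification: the fraction of queries
    whose true gallery identity appears within the top n entries of
    that query's ranked candidate list."""
    hits = sum(1 for ranked, gt in zip(rankings, truths) if gt in ranked[:n])
    return hits / len(truths)
```

mAP complements this by averaging precision over every correct match's rank, so it rewards retrieving all instances of a vehicle rather than just one early hit.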
With the rapid advancement of information science in shield tunnel construction, the monitoring methods for shield equipment during tunnel boring are increasingly refined, and the recorded construction data includes not only information on the internal workings of the shield equipment but also on its interaction with the external strata. Machine learning data analysis is powerful and has a wider range of applications and scope than traditional data analysis methods in the civil construction industry. Through machine learning methods, the collected data and information can be mined and analysed in depth to find the intrinsic connections that can help improve the safety and efficiency of shield tunnel construction. This work presents a literature analysis of the current state of machine learning for shield tunnel construction at home and abroad, briefly describes the basic principles of machine learning methods, summarises and analyses the research situation in shield tunnel construction, reviews the progress of machine learning-based shield equipment condition analysis, intelligent prediction and control methods for shield tunneling parameters, and shield tunneling surface deformation prediction, and summarises the shortcomings of current research. Finally, an outlook on the development of shield tunneling towards intelligence is presented.