Applied and Computational Engineering
- The Open Access Proceedings Series for Conferences
Proceedings of the 3rd International Conference on Signal Processing and Machine Learning
2023-02-25
978-1-915371-55-3 (Print)
978-1-915371-56-0 (Online)
2023-06-14
Omer Burak Istanbullu, Eskisehir Osmangazi University
Object detection algorithms can benefit many fields today. Applied to surveillance camera systems, object detection makes it more efficient to locate crime scenes or find missing children. This paper investigates the performance of different object detection algorithms in a real-world scenario. In our experiments, CenterNet++ outperforms YOLO and Mask R-CNN, two classic object detection algorithms, on the MS COCO dataset, which suggests that CenterNet++ can deliver both accuracy and speed.
As technology advances, unmanned aerial vehicles (UAVs) are increasingly used in power systems, especially for power line patrol. At the same time, the charging methods for UAV power patrol systems have attracted growing attention. This paper therefore analyses the different wireless charging approaches for UAVs applied in power circuit systems, including the self-resonant ZVS topology, the double-LCCL topology, the LCL-LCL/S hybrid self-switching resonant wireless charging system, and a wireless charging method using solar energy. Finally, future development directions for UAV wireless charging in this field are discussed.
Today's popularisation of portable devices largely depends on progress in integrated circuits. Modern very-large-scale integration (VLSI) technology allows billions of transistors to be packed onto a single chip. In past years, digital design automation in VLSI has advanced far more than analogue design automation, because traditional methods struggle to model the performance changes that physical design causes in analogue or mixed-signal components. In the early 2000s, rapid advances in machine learning and computing power made analogue design automation possible. Despite its outstanding performance, however, the transparency issue has become significant. This paper introduces the history of VLSI physical design, including placement and routing in the early stages. The changes that machine learning (ML) has brought are discussed in the third section. An analysis of the potential problems follows, together with a brief categorisation of some well-known work in interpretable machine learning, which could be the primary direction for VLSI automation to be further popularised in the future.
The technological revolution has brought major changes to the framework of strategic stability. Artificial intelligence (AI) has thrived internationally as a disruptive technology and is applied to many fields in the 21st century. This paper evaluates the strengths, limitations, and impacts of AI-empowered military applications on international strategic stability. Applying AI technology to military purposes affects nations' defensive and offensive capabilities both positively and negatively, and hence international strategic stability as well; on balance, however, the impact of AI on international strategic stability is mainly negative. To face these stability challenges, nations should formulate systematic governance of military AI, and the global community should promote friendly multilateral cooperation. These views offer significant implications for maintaining international strategic stability and improving AI governance capabilities in the foreseeable future.
Information extraction is an important part of natural language processing and an important basis for building question-answering systems and knowledge graphs. With the development of deep learning techniques, a growing number of new technologies are being applied to information extraction. This paper first introduces information extraction techniques and their main tasks, then describes the development history of information extraction, and surveys the practice and application of different types of information extraction techniques in knowledge graph construction, including entity extraction, relationship extraction, and attribute extraction. Finally, some open problems and research directions for information extraction techniques are discussed.
New electronic devices represented by SiC have brought new opportunities to improve the efficiency and power density of drive circuit systems. However, complex drive circuit systems built on these new devices still face many challenges before their advantages can be fully exploited. This paper focuses on the electromagnetic interference of drive circuits. Through an in-depth study of filter circuits, several second-order active filter circuits are proposed to suppress signal interference on the conduction path. Finally, a bandstop circuit with an adjustable quality factor is designed. The proposed circuit provides good adjustment and feedback capability when filtering interference signals.
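The adjustable-quality-factor bandstop stage described above can be related to the standard second-order notch transfer function (an assumption about the circuit's form, since the abstract gives no equations):

```latex
H(s) = \frac{s^2 + \omega_0^2}{s^2 + \dfrac{\omega_0}{Q}\, s + \omega_0^2}
```

where $\omega_0$ is the notch (centre) frequency and the stopband width is $\mathrm{BW} = \omega_0 / Q$, so increasing $Q$ narrows the band of rejected interference frequencies.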
In this research, a 4-link robotic simulator program in MATLAB is used to find the relation between the lengths of the links and the lengths of the two feet. While some previous research focused on the zero-moment point (ZMP) of the robot geometry, it neglected how the position of the ZMP changes during locomotion. A simulation model is introduced to present an integrated locomotion process of the robot, and a testing method was developed based on comparison of the MATLAB simulation results. Three different scenarios are analysed and compared to the original outcome. The main finding of this research is that the lengths of L5 and L6 should not be smaller than the lengths of L2 and L3; the critical length of L5 and L6 was determined to be 3.0065.
In order to give a more comprehensive introduction to the research progress of recall strategies in recommender systems, this paper reviews the application of diverse recall methods in various recommender systems by different researchers. By searching and reading literature in major databases such as Google Scholar, it is found that recall methods suitable for news recommendation systems are also generally applicable to other recommendation systems. Therefore, this paper takes the news recommendation system as an example to introduce traditional content-based recall and collaborative-filtering-based methods, as well as hot (popularity-based) recall and embedding-based recall developed in recent years. Furthermore, recall strategies specifically applicable to music and e-commerce recommendation systems (emotion-based recall and UIBB) are introduced. This paper briefly introduces these recall styles and collects researchers' evaluations of and attitudes towards them, aiming to help recommender system designers optimise their recall methods.
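The content-based recall mentioned above scores candidate articles by their similarity to the user's reading history. A minimal sketch, assuming a bag-of-words profile and cosine similarity (the abstract does not specify the representation):

```python
from collections import Counter
from math import sqrt


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def content_based_recall(history, candidates, k=2):
    """Recall the k candidate articles most similar to the user's history.

    history and candidates are plain-text strings; the user profile is the
    merged term-frequency vector of everything the user has read.
    """
    profile = Counter(w for doc in history for w in doc.lower().split())
    scored = [(cosine(profile, Counter(c.lower().split())), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)[:k]]
```

For example, a user who has read market news will recall market-related candidates ahead of unrelated ones; real systems would use TF-IDF or learned embeddings instead of raw counts.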
As computing advanced in the 1990s, computational fluid dynamics (CFD) progressed rapidly. Using CFD to analyse complex or idealised conditions has a very wide range of applications. However, when confronted with cases demanding higher accuracy or larger numbers of samples, CFD requires considerable time and money to resolve the issues. Machine learning (ML) offers a promising alternative for CFD. This paper reviews the coupling of ML and CFD and the progress it has brought to CFD applications. It briefly introduces CFD together with ML approaches such as supervised learning, unsupervised learning, and reinforcement learning. The article also discusses challenges and open issues in research on ML models for CFD, such as using multiple machine learning models or hybrid models to solve problems and quantifying the uncertainty of machine learning models. If these problems are solved, ML methods can offer promising prospects for CFD.
The popularization of electronic equipment makes weak-current (low-voltage) systems more and more important. Against the background of today's advanced science and technology and people's increasingly high demands for housing, this paper mainly describes the development of intelligent weak-current engineering systems in China and their application in intelligent buildings. On this basis, the future development direction of intelligent weak-current engineering systems in China is discussed, and a concrete example illustrates how such a system can bring comfort and safety to people's homes in real life. It is believed that with China's economic development and the improvement of intelligent weak-current engineering technology, working environments in intelligent buildings will become better and more convenient.
With the rapid development of big data technology, demand for personalised music recommendation systems is growing ever more urgent. However, current music recommendation systems still have problems such as inaccurate recommendations and slow recommendation speeds, as well as cold starts and data sparsity caused by massive data. To address the storage and computing pressure that continuously growing data places on a recommendation system, this paper improves the QQ Music recommendation system using a collaborative filtering recommendation algorithm built on offline data warehouse technology. After testing, the music recommendation system designed in this paper shows good stability, scalability, and efficiency.
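The collaborative filtering at the heart of the system above can be sketched in its item-based form: tracks that were played by many of the same users are considered similar, and a user is recommended unheard tracks similar to those already played. This is a minimal illustration, not the paper's actual implementation, and the co-occurrence similarity is one common choice among several:

```python
from collections import defaultdict
from math import sqrt


def item_cf_recommend(interactions, user, k=2):
    """Recommend k unheard tracks via item-based collaborative filtering.

    interactions: dict mapping user id -> set of track ids the user played.
    Similarity between two tracks is their co-occurrence count normalised
    by the square root of the product of their popularities.
    """
    item_users = defaultdict(set)          # inverted index: track -> users
    for u, items in interactions.items():
        for i in items:
            item_users[i].add(u)

    heard = interactions[user]
    scores = defaultdict(float)
    for i in heard:                        # accumulate similarity to unheard tracks
        for j in item_users:
            if j in heard:
                continue
            co = len(item_users[i] & item_users[j])
            if co:
                scores[j] += co / sqrt(len(item_users[i]) * len(item_users[j]))
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

In an offline data warehouse setting, the inverted index and similarity scores would be precomputed in batch jobs rather than on the fly as here.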
The development and application of machine learning has been a hot topic in recent years, and enabling machines to read people's emotions is of great importance in many aspects of social development, such as healthcare, product development, and human-computer interaction. This study recognises seven human emotions by building and training a neural network model. A large-scale video dataset, Dynamic Facial Expression in-the-Wild (DFEW), is used to train a ResNet50 model and report per-class accuracies. Three training outcomes are distinguished: if the accuracies are all above 80%, the neural network performs well; if some class accuracies are above 80% while the others are above 50%, it performs partially well; if the accuracies are all under 50%, it performs poorly.
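The three-way rule stated above can be written down directly. Note that, as stated, the rule does not cover every possibility (e.g. all accuracies between 50% and 80%), so this sketch adds an explicit fallback label of our own:

```python
def rate_model(accuracies):
    """Classify per-emotion accuracies using the abstract's three-way rule.

    accuracies: per-class accuracies as fractions in [0, 1].
    """
    if all(a > 0.80 for a in accuracies):
        return "well-performed"
    if all(a < 0.50 for a in accuracies):
        return "poor-performed"
    if any(a > 0.80 for a in accuracies) and all(a > 0.50 for a in accuracies):
        return "partially well-performed"
    return "unclassified"  # cases the stated rule leaves undefined
```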
The e-passport is a bold approach that combines biometric identification with radio-frequency identification (RFID). By employing the RSA encryption algorithm, e-passports can effectively protect security during the transmission of information. This paper introduces the application of the RSA encryption algorithm in the e-passport, the possible security risks and vulnerabilities of active authentication, and an implementation of RSA encryption and decryption. The RSA encryption and decryption process is simulated in Python, and the advantages and disadvantages of using this implementation in an e-passport are discussed. Because this paper uses relatively basic RSA algorithm logic, certain security issues remain when applying it to an actual e-passport.
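The simulated RSA encryption and decryption can be sketched as follows. This is textbook RSA with toy primes for illustration only; real e-passport deployments use padded RSA with keys of 2048 bits or more, which is exactly the gap between "basic RSA algorithm logic" and production security that the abstract points out:

```python
# Textbook RSA key generation with small illustrative primes.
p, q = 61, 53            # toy primes (never this small in practice)
n = p * q                # public modulus = 3233
phi = (p - 1) * (q - 1)  # Euler's totient = 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi


def encrypt(m: int) -> int:
    """Encrypt an integer message m < n with the public key (e, n)."""
    return pow(m, e, n)


def decrypt(c: int) -> int:
    """Decrypt a ciphertext with the private key (d, n)."""
    return pow(c, d, n)
```

For example, `decrypt(encrypt(65))` returns `65`. The three-argument `pow` performs fast modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse.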
The community's perspectives and comments are a valuable resource for businesses and other organisations. In the past, businesses relied on inefficient procedures to gather them; now that social media is the new trend, it enables an unprecedented level of analysis and evaluation of a wide range of topics and components in different contexts and settings. This field of study is called "sentiment analysis." In this research project, a support vector machine (SVM), a popular supervised machine learning algorithm for determining text polarity, was used for sentiment analysis. Precision, recall, and F-measure are used to evaluate the SVM on two datasets of pre-classified tweets, and tables and graphs communicate the research findings. This research classifies tweets about US airlines and performs sentiment analysis with an accuracy of 91.8 percent, a precision of 91.3 percent, a recall of 82.3 percent, and an F1 of 86.9 percent.
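The three evaluation measures named above are computed from the confusion counts of the classifier. A minimal sketch of their standard definitions (true positives, false positives, false negatives):

```python
def prf(tp: int, fp: int, fn: int):
    """Precision, recall and F1 from confusion counts.

    precision = tp / (tp + fp): of the tweets predicted positive, how many were.
    recall    = tp / (tp + fn): of the truly positive tweets, how many were found.
    F1 is the harmonic mean of precision and recall.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```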
Machine learning is a field of study in which a computer can learn by itself without a human explicitly hard-coding the knowledge for it, and algorithms make up its backbone. This paper aims to study the field of machine learning and its algorithms. It examines different types of machine learning models and introduces their most popular algorithms. The methodology of this paper is a literature review, which examines the most commonly used machine learning algorithms in the current field, including Naive Bayes, decision trees, k-nearest neighbours (KNN), and k-means clustering. Nowadays, machine learning is everywhere, and almost everyone using a technology product enjoys its convenience: applications like spam mail classification, image recognition, personalised product recommendations, and natural language processing all use machine learning algorithms. The conclusion is that no single algorithm can solve all problems; the choice of algorithm and model must depend on the specific problem.
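Of the algorithms surveyed above, KNN is the simplest to state concretely: classify a query point by majority vote among its k nearest training points. A minimal sketch using Euclidean distance (one common choice among several metrics):

```python
from collections import Counter
from math import dist


def knn_predict(train, query, k=3):
    """k-nearest-neighbours classification by majority vote.

    train: list of (feature_tuple, label) pairs; query: feature tuple.
    Sorts the training set by Euclidean distance to the query and
    returns the most common label among the k closest points.
    """
    neighbours = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

The full sort makes this O(n log n) per query; production implementations use k-d trees or approximate indexes instead.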
In recent years, the boom in the technology industry has been greatly accelerated by the development of artificial intelligence (AI). AI, which is based on machine learning (ML), has been able to develop rapidly only because of the continuously increasing computational capacity of AI processors. Compared with general-purpose processors (GPPs), AI processors have specially designed architectures that accelerate the operations of AI applications, such as convolution, matrix computation, and massively parallel computing. The objectives of this paper are: (1) to illustrate the differences between general-purpose processors and AI processors; and (2) to summarise the characteristics of three mainstream AI processors, the GPU, FPGA, and ASIC, and draw a comparison among them. It shows that GPUs provide very competitive performance at high power consumption; FPGAs offer high efficiency at low cost; and ASICs provide the highest performance with the lowest power consumption, but cost the most.
The customisation boom across the furniture industry creates an ever more significant demand for digitalisation, and the impact of the epidemic has further made many furniture companies realise the importance of industry digitalisation. To achieve the digital transformation of furniture enterprises, the collection and application of big data is key: by collecting and using customer and equipment data, companies can optimise the furniture supply chain to reduce inventories of raw materials and finished products; realise digital design and intelligent production to improve production efficiency and material utilisation; and analyse target customer groups to improve marketing strategies, attract more customers, and increase turnover. In recent years, many scholars have also proposed specific approaches to digital transformation in the furniture industry. Using the literature review and summary induction methods, this paper summarises the role big data can play in the digital transformation of the furniture industry from three aspects: the furniture production supply chain, digitalisation of furniture design and manufacturing, and furniture marketing.
Since the end of 2019, the COVID-19 virus has gradually spread and eventually become global. In this context, it is important to control the spread of COVID-19 quickly. This project attempts to use artificial intelligence to identify CT images of the lungs of COVID-19 patients and thus facilitate rapid screening. The main focus of this study is to use transfer-learning-based deep learning to identify whether patients are infected with the novel coronavirus from their lung images. The difficulty of this task is that the number of lung images of COVID-19 patients is very limited, which makes training traditional neural networks very difficult: conventional computer vision deep learning requires a large number of samples to extract image features, and if the dataset is too small, the model overfits and fails to identify COVID-19 accurately. To solve these problems, this paper studies the identification of patients' lung CT images by a deep learning method based on model transfer: models are built for similar types of problems, stored, and then fine-tuned. Eventually, a model was trained to recognise images of the lungs of COVID-19 patients. The method was tested on publicly available COVID-19 datasets, and the results showed an identification accuracy of about 70%.
Image interpolation is one of the most crucial processing steps for a Bayer mosaic pattern image from a charge-coupled device. Two problems are common when processing such images. The first is false colouring: erroneously interpolating across an edge rather than along it results in sudden or unexpected colour changes. The second is the zipper effect, caused by the demosaicing algorithm's propensity to average pixel values along edges, particularly in the red and blue planes, which blurs edges. This paper proposes two image optimisation algorithms to solve these problems: a Hibbard-based edge-improvement algorithm and clustering-based colour interpolation. The improved Hibbard algorithm is combined with variance comparison, diagonal gradient computation, and a clustering approach to complete the image optimisation. In the experiments, edge interpolation yields better results: the algorithm eliminates the zipper effect at feature edges more effectively and obtains clearer edge features.
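The core idea behind Hibbard-style edge-directed interpolation, which the improved algorithm above builds on, is to interpolate along an edge rather than across it by choosing the neighbour direction with the smaller gradient. A minimal single-pixel sketch (real demosaicing operates over the whole Bayer array and also corrects the red and blue planes):

```python
def interp_green(g_left, g_right, g_up, g_down):
    """Edge-directed green interpolation at a red/blue Bayer site.

    Compares the horizontal and vertical green gradients and averages
    only along the flatter direction, so edges are not smeared across.
    """
    dh = abs(g_left - g_right)   # horizontal gradient
    dv = abs(g_up - g_down)      # vertical gradient
    if dh < dv:
        return (g_left + g_right) / 2   # interpolate along a vertical edge
    if dv < dh:
        return (g_up + g_down) / 2      # interpolate along a horizontal edge
    return (g_left + g_right + g_up + g_down) / 4  # no dominant edge
```

At a vertical edge (equal left/right values, very different up/down values) this keeps the edge sharp, whereas a plain four-neighbour average would produce the blurring that causes the zipper effect.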
Stock forecasting aims to predict future stock prices based on past price changes in the market, playing an essential role in financial transactions. However, since the stock market is highly uncertain, stock prediction is complex and challenging. This paper uses a long short-term memory (LSTM) model to predict the stock market and compares it with current stock prediction algorithms. First, we preprocess the raw dataset and normalise the data into the range from 0 to 1. Second, we introduce the LSTM model and improve its performance by tuning four parameters: learning rate, number of hidden layers, number of epochs, and batch size. Finally, we evaluate the models with four metrics: mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R2), and mean absolute percentage error (MAPE). Our LSTM model outperforms the previous model in experiments in terms of MAE, RMSE, R2, and MAPE.
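The preprocessing and evaluation steps above can be sketched directly: min-max normalisation into [0, 1], and the four metrics in their standard definitions (the paper's exact implementation is not given, so this is an illustrative version):

```python
from math import sqrt


def minmax(xs):
    """Scale a price series into [0, 1], as in the preprocessing step."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]


def metrics(y_true, y_pred):
    """MAE, RMSE, R^2 and MAPE (in percent) for a forecast."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    mape = sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n * 100
    return mae, rmse, r2, mape
```

Note that MAPE divides by the true value, so it is undefined when a true price is zero; normalised prices should therefore be rescaled back before computing it.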