EAI Endorsed Transactions on Internet of Things
https://publications.eai.eu/index.php/IoT
<p>EAI Endorsed Transactions on Internet of Things is an open-access, peer-reviewed scholarly journal covering all technologies and application fields related to the Internet of Things. The journal publishes research articles, review articles, commentaries, editorials, technical articles, and short communications on a quarterly frequency. Authors are not charged for article submission and processing.</p> <p><strong>INDEXING</strong>: Scopus, DOAJ, CrossRef, Google Scholar, ProQuest, EBSCO, CNKI, Dimensions</p>European Alliance for Innovation (EAI)en-USEAI Endorsed Transactions on Internet of Things2414-1399<p>This is an open-access article distributed under the terms of the Creative Commons Attribution <a href="https://creativecommons.org/licenses/by/3.0/" target="_blank" rel="noopener">CC BY 3.0</a> license, which permits unrestricted use, distribution, and reproduction in any medium so long as the original work is properly cited.</p>Machine Learning Models in the large-scale prediction of parking space availability for sustainable cities
https://publications.eai.eu/index.php/IoT/article/view/2269
<p>The search for effective solutions to address traffic congestion presents a significant challenge for large urban cities. Analysis of urban traffic congestion has revealed that more than 70% of it can be attributed to prolonged searches for parking spaces. Consequently, accurate prediction of parking space availability in advance can play a vital role in assisting drivers to find vacant parking spaces quickly. Such solutions hold the potential to reduce traffic congestion and mitigate its detrimental impacts on the environment, economy, and public health. Machine learning algorithms have emerged as promising approaches for predicting parking space availability. However, comparative studies evaluating which machine learning models are best suited for large-scale prediction within a given prediction time period are missing.<br />In this study, we compared nine machine learning algorithms to assess their efficiency in predicting long-term, large-scale parking space availability. Our comparison was based on two approaches: 1) using on-street parking data alone and 2) incorporating data from external sources (such as weather data). We used automated machine learning models to compare the performance of different algorithms according to prediction efficiency and execution time. Our results indicated that the automated machine learning models implemented were well fitted to our data. Notably, the Extra Tree and Random Forest algorithms demonstrated the highest efficiency among the models tested. Moreover, we observed that the Random Forest algorithm exhibited less computational demand than the Extra Tree algorithm, making it particularly advantageous in terms of execution time.
Therefore, this work suggests that the Random Forest algorithm is the most suitable machine learning model in terms of efficiency and execution time for accurately predicting large-scale, long-term parking space availability.</p>Abdoul Nasser Hamidou SoumanaMohamed Ben SalahSoufiane IdbraimAbdellah Boulouz
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-302023-11-301010.4108/eetiot.2269Deciphering Microorganisms through Intelligent Image Recognition: Machine Learning and Deep Learning Approaches, Challenges, and Advancements
https://publications.eai.eu/index.php/IoT/article/view/4484
<p>Microorganisms are pervasive and have a significant impact in various fields such as healthcare, environmental monitoring, and biotechnology. Accurate classification and identification of microorganisms are crucial for professionals in diverse areas, including clinical microbiology, agriculture, and food production. Traditional methods for analyzing microorganisms, like culture techniques and manual microscopy, can be labor-intensive, expensive, and occasionally inadequate due to morphological similarities between different species. As a result, there is an increasing need for intelligent image recognition systems to automate microorganism classification procedures with minimal human involvement. In this paper, we present an in-depth analysis of ML and DL perspectives used for the precise recognition and classification of microorganism images, utilizing a dataset comprising eight distinct microorganism types: Spherical bacteria, Amoeba, Hydra, Paramecium, Rod bacteria, Spiral bacteria, Euglena, and Yeast. We employed several ML algorithms, including SVM, Random Forest, and KNN, as well as the deep learning algorithm CNN. Among these methods, the highest accuracy was achieved using the CNN approach. We delve into current techniques, challenges, and advancements, highlighting opportunities for further progress.</p>Syed KhasimHritwik GhoshIrfan Sadiq RahatKareemulla ShaikManava Yesubabu
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-272023-11-271010.4108/eetiot.4484Analysis of Current Advancement in 3D Point Cloud Semantic Segmentation
https://publications.eai.eu/index.php/IoT/article/view/4495
<p>INTRODUCTION: The division of a 3D point cloud into various meaningful regions or objects is known as point cloud segmentation.</p><p>OBJECTIVES: The paper discusses the challenges faced in 3D point cloud segmentation, such as the high dimensionality of point cloud data, noise, and varying point densities.</p><p>METHODS: The paper compares several commonly used datasets in the field, including the ModelNet, ScanNet, S3DIS, Semantic3D, and ApolloCar3D datasets, and provides an analysis of the strengths and weaknesses of each dataset. It also provides an overview of papers that use traditional clustering techniques, deep learning-based methods, and hybrid approaches in point cloud semantic segmentation, and discusses the benefits and drawbacks of each approach.</p><p>CONCLUSION: This study sheds light on the state of the art in semantic segmentation of 3D point clouds.</p>Koneru Pranav SaiSagar Dhanraj Pande
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-282023-11-281010.4108/eetiot.4495A Comparative Analysis of Various Deep-Learning Models for Noise Suppression
https://publications.eai.eu/index.php/IoT/article/view/4502
<p class="ICST-abstracttext"><span lang="EN-GB">Excessive noise in speech communication systems is a major issue affecting various fields, including teleconferencing and hearing aid systems. To tackle this issue, various deep-learning models have been proposed, with autoencoder-based models showing remarkable results. In this paper, we present a comparative analysis of four different deep learning based autoencoder models, namely model ‘alpha’, model ‘beta’, model ‘gamma’, and model ‘delta’, for noise suppression in speech signals. The performance of each model was evaluated using the objective metric mean squared error (MSE). Our experimental results showed that model ‘alpha’ outperformed the other models, achieving a minimum error of 0.0086 and a maximum error of 0.0158. Model ‘gamma’ also performed well, with a minimum error of 0.0169 and a maximum error of 0.0216. These findings suggest that the proposed models have great potential for enhancing speech communication systems in various fields.</span></p>Henil GajjarTrushti SelarkaAbsar M. LakdawalaDhaval B. ShahP. N. Kapil
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-292023-11-291010.4108/eetiot.4502Demand Forecasting and Budget Planning for Automotive Supply Chain
https://publications.eai.eu/index.php/IoT/article/view/4514
<p class="ICST-abstracttext"><span lang="EN-GB">Over the past 20 years, there have been significant changes in the supply chain business. One of the most significant changes has been the development of supply chain management systems. It is now essential to use cutting-edge technologies to maintain competitiveness in a highly dynamic environment. Restocking inventories is one of a supplier’s main survival strategies, and knowing what expenses to expect in the next month aids in better decision-making. This study aims to solve the three most common industry problems in the supply chain: inventory management, budget forecasting, and the cost vs benefit of every supplier. The selection of the best forecasting model is still a major problem in the literature. In this context, this article compares the performances of Auto-Regressive Integrated Moving Average (ARIMA), Holt-Winters (HW), and Long Short-Term Memory (LSTM) models for the prediction of a time series formed by a dataset of supply chain products. As the performance measure, the Root Mean Square Error (RMSE) is used. The main focus is on the Automotive Business Unit, with the top 3 products under this segment and the United States as the country in focus. All three models, ARIMA, HW, and LSTM, obtained good results on the performance metrics.</span></p>Anand LimbareRashmi Agarwal
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-302023-11-301010.4108/eetiot.4514Development and Simulation Two Wireless Hosts Communication Network Using OMNeT++
https://publications.eai.eu/index.php/IoT/article/view/4519
<p class="ICST-abstracttext"><span lang="EN-GB">A wireless network is a collection of computers and other electronic devices that exchange information by means of radio waves. Endpoint computing devices can all be connected without the need for hardwired data cabling thanks to the prevalence of wireless networks in today's businesses and networks. This paper aims to design and construct a wireless network model connecting two hosts, implemented to simulate wireless communication. In the simulation, one host sends User Datagram Protocol (UDP) data wirelessly to the other. Additionally, the protocol models, including both the physical layer and the lower layers, were kept as simple as possible. The architecture and functionality of the new simulator demonstrate its ability to handle host mobility; in particular, when a host moves out of range, the simulation ends.</span></p>M. Derbali
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-302023-11-301010.4108/eetiot.4519Enhancing Real-time Object Detection with YOLO Algorithm
https://publications.eai.eu/index.php/IoT/article/view/4541
<p class="ICST-abstracttext"><span lang="EN-GB">This paper introduces YOLO, a leading approach to object detection. Real-time detection plays a significant role in various domains like video surveillance, computer vision, autonomous driving, and the operation of robots. The YOLO algorithm has emerged as a popular and structured solution for real-time object detection due to its ability to detect items in a single pass through the neural network. This research article seeks to lay out an extensive understanding of the YOLO algorithm, its architecture, and its impact on real-time object detection. YOLO frames object detection as a regression problem, predicting spatially separated bounding boxes directly from full images. Tasks such as recognition, detection, and localization find widespread applicability in real-world scenarios, making object detection a crucial subdivision of computer vision. The algorithm detects objects in real time using convolutional neural networks (CNNs). Overall, this research paper serves as a comprehensive guide to understanding real-time object detection with the You Only Look Once (YOLO) algorithm. By examining its architecture, variations, and implementation details, the reader can gain an understanding of YOLO’s capability.</span></p>Gudala LavanyaSagar Dhanraj Pande
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-052023-12-051010.4108/eetiot.4541Comprehensive Analysis of Blockchain Algorithms
https://publications.eai.eu/index.php/IoT/article/view/4549
<p>INTRODUCTION: Blockchain technology has gained significant attention across various sectors as a distributed ledger solution. To comprehend its applicability and potential, a comprehensive understanding of blockchain's essential elements, functional traits, and architectural design is imperative. Consensus algorithms play a critical role in ensuring the proper operation and security of blockchain networks, and their selection is crucial for optimal performance and security.</p><p>OBJECTIVES: The objective of this research is to analyse and compare various consensus algorithms based on their performance and efficiency in mining blocks.</p><p>METHODS: To achieve this, an experimental model was developed to measure the number of mined blocks over time for different consensus algorithms.</p><p>RESULTS: The results provide valuable insights into the effectiveness and scalability of these algorithms.</p><p>CONCLUSION: The findings of this study contribute to the understanding of consensus algorithm selection and its impact on the overall performance of blockchain systems. By enhancing our knowledge of consensus algorithms, this research aims to facilitate the development of more secure and efficient blockchain applications.</p>Prabhat Kumar TiwariNidhi AgarwalShabaj AnsariMohammad Asif
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-062023-12-061010.4108/eetiot.4549Cloud Computing: Optimization using Particle Swarm Optimization to Improve Performance of Cloud
https://publications.eai.eu/index.php/IoT/article/view/4577
<p>INTRODUCTION: In the contemporary world, cloud computing is acknowledged as an advanced technology for managing and storing huge amounts of data over the network. To handle network traffic and schedule tasks effectively, an efficient load-balancing algorithm should be implemented. This can reduce network traffic and overcome the problem of limited bandwidth. Various research articles present ample optimization techniques for data transfer under limited bandwidth. Among them, a few solutions have been chosen for the current research article, such as optimization of the load distribution across the various resources provided by the cloud.</p><p>OBJECTIVES: In this paper, a comparative analysis of various task-scheduling algorithms (FCFS, SJF, Round Robin, and PSO) is presented to accumulate the outcomes and evaluate the overall performance of the cloud at different numbers of processing elements (pesNumber).</p><p>METHODS: The overall performance of task scheduling is significantly enhanced by the PSO algorithm implemented on the cloud in comparison with FCFS, SJF, and Round Robin. The outcomes of the optimization technique have been implemented and tested on the CloudSim simulator.</p><p>RESULTS: The comparative analysis was conducted based on scalability when increasing the number of processing elements in the cloud. A major insight is that the proposed algorithm still performs well when the number of VMs is increased, and it successfully minimizes waiting time, turnaround time, and completion time by 43%, which is significantly better than the outcomes of existing research articles.</p><p>CONCLUSION: To optimize task scheduling in cloud computing, a comparative analysis of various task-scheduling algorithms has been presented, including the Particle Swarm Optimization algorithm.</p>NidhiMalti NagleVashal Nagar
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-122023-12-121010.4108/eetiot.4577Survey of Accuracy Prediction on the PlantVillage Dataset using different ML techniques
https://publications.eai.eu/index.php/IoT/article/view/4578
<p class="ICST-abstracttext"><span lang="EN-GB">A plant is susceptible to numerous illnesses while it is growing. The early detection of plant illnesses is one of the most serious problems in agriculture. Plant disease outbreaks may have a remarkable impact on crop yield, slowing the rate of the nation's economic growth. Early plant disease detection and treatment are possible using deep learning, computer vision, and ML techniques. The methods used for the categorization of plant diseases have even outperformed human performance and conventional image-processing-based methods. In this context, we review 48 works over the last five years that address problems with disease detection, dataset properties, the crops under study, and pathogens in various ways. The research results discussed in this paper, with a focus on work published between 2015 and 2023, demonstrate that among numerous techniques (MobileNetV2, K-Means+GLCM+SVM, Residual Teacher-Student CNN, SVM+K-Means+ANN, AlexNet, AlexNet with Learning from Scratch, AlexNet with Transfer Learning, VGG16, GoogleNet with Training from Scratch, GoogleNet with Transfer Learning) applied on the PlantVillage Dataset, the architecture AlexNet with Transfer Learning identified diseases with the highest accuracy.</span></p>Vaishnavi PandeyUtkarsh TripathiVimal Kumar SinghYouvraj Singh GaurDeepak Gupta
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-122023-12-121010.4108/eetiot.4578Using Deep Learning and Machine Learning: Real-Time Discernment and Diagnostics of Rice-Leaf Diseases in Bangladesh
https://publications.eai.eu/index.php/IoT/article/view/4579
<p>Bangladesh is heavily reliant on rice production, but a staggering annual decline of 37% in rice output due to insufficient knowledge in recognizing and managing rice plant diseases has raised concerns. As a result, there is a pressing need for a system that can accurately identify and control rice plant diseases automatically. CNNs have demonstrated their effectiveness in detecting plant diseases, thanks to their exceptional image classification capabilities. Nevertheless, research on rice plant disease identification remains scarce. This study offers a comprehensive overview of rice plant ailments and explores DL techniques used for their detection. By evaluating the advantages and disadvantages of various systems found in the literature, the study aims to identify the most accurate means of detecting and controlling rice plant diseases using DL techniques. We present a real-time detection and diagnostic system for rice leaf diseases that utilizes ML methods. This system is designed to identify three prevalent rice plant diseases, specifically leaf smut, bacterial leaf blight, and brown spot. Clear images of affected rice leaves against a white background serve as input data for the system. To train on the dataset, several ML algorithms were employed, including KNN, Naive Bayes, J48, and Logistic Regression. Following the pre-processing stage, the decision tree algorithm demonstrated an accuracy of over 97% when applied to the test dataset. In conclusion, implementing an automated system that leverages ML techniques is vital for reducing the time and labor required for detecting and managing rice plant diseases. Such a system would contribute significantly to ensuring the healthy growth of rice plants in Bangladesh, ultimately boosting the nation’s rice production.</p>Syed KhasimIrfan Sadiq RahatHritwik GhoshKareemulla ShaikSujit Kumar Panda
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-122023-12-121010.4108/eetiot.4579Mobile Security Operation Centre (mSOC)
https://publications.eai.eu/index.php/IoT/article/view/4586
<p>Attacks on the internet are becoming increasingly threatening. For naïve home users, who are poorly protected, there is always an imminent danger of being cyber attacked.</p><p>This paper aims to design and build an IoT-based network security device that runs as an access point for users to connect to the Internet in a home setting. The paper discusses a standalone perimeter security solution with Incident Response (IR) life cycle management and controls through an IoT device, a Raspberry Pi. Enterprise-level features such as a Next Generation Firewall (NGFW), a Network Intrusion Detection System (NIDS), domain control for ad/spam blocking, and Security Information and Event Management (SIEM) for log correlation run on a System on Chip (SoC), which can be installed anywhere and carried for mobile operations. Hence the name, Mobile Security Operation Centre (mSOC).</p><p>This solution intends to protect the user when browsing the internet by blocking, or providing visibility into, the malicious connections made to or from users. The mSOC can filter domains based on whitelist/blacklist and regex patterns. It can also identify the domains that are blocked or allowed, and it provides visibility into traffic, application statistics, and IP reputation. IP reputation and malicious domains can then act as input to iptables for L3/L4 blocking. A software user interface is developed to integrate and manage multiple open-source applications such as dnsmasq, ELK, Graylog, SQLite3, iptables, and AdminLTE as a single product that could serve as a complete security solution for a home or Small or Medium Business (SMB). Thus, the proposed solution secures naïve users from security exploitations.</p>Sudhir WaliaQazi Mustafa KaleemShinu Abhi
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-132023-12-131010.4108/eetiot.4586Security Methods to Improve Quality of Service
https://publications.eai.eu/index.php/IoT/article/view/4587
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Security and Quality of Service (QoS) are two of the most critical aspects of communication networks. Security measures are implemented to protect the network from unauthorized access and malicious attacks, whereas QoS measures are implemented to ensure that the network is reliable, efficient, and can meet the demands of users. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: This paper examines various methods of network security and their impact on the quality of service (QoS) in computer networks. The study analyses different types of network attacks, such as denial of service (DoS), distributed denial of service (DDoS), and intrusion attempts, and their impact on QoS. The paper also explores various security mechanisms, such as intrusion detection and prevention systems (IDPS), firewalls, virtual private networks (VPNs), and encryption techniques, that can help mitigate network security threats while maintaining QoS.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: The study evaluates the strengths and weaknesses of the security mechanisms in terms of their ability to provide protection against network attacks while minimizing the impact on QoS. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The paper provides recommendations for organizations to enhance their network security posture while improving QoS, such as implementing robust network security policies, investing in advanced security tools, and training employees to recognize and respond to network security incidents.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: This paper offers a comprehensive analysis of network security methods and their impact on QoS, providing insights and recommendations for organizations to improve their network security posture and maintain a high level of QoS.</span></p>Nidhi AgarwalAnjaliAnuj Singh ChauhanAnkit Kumar
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-132023-12-131010.4108/eetiot.4587A Hybrid Deep Learning GRU based Approach for Text Classification using Word Embedding
https://publications.eai.eu/index.php/IoT/article/view/4590
<p class="ICST-abstracttext"><span lang="EN-GB">Text categorization has become an increasingly important issue for businesses that handle massive volumes of data generated online, and it has found substantial use in the field of NLP. The capacity to group texts into separate categories is crucial for users to effectively retain and utilize important information. Our goal is to improve upon existing recurrent neural network (RNN) techniques for text classification by creating a deep learning strategy through our study. The main difficulty in text classification, however, is raising the quality of the classifications made, as the overall efficacy of text classification is often hampered by the data semantics' inadequate context sensitivity. To address this difficulty, our study presents a unified approach that examines the effects of word embedding and the GRU on text classification. In this study, we use the TREC standard dataset. The RCNN has four convolution layers, four LSTM levels, and two GRU layers; the RNN, on the other hand, has four GRU layers and four LSTM levels. The gated recurrent unit (GRU) is a kind of recurrent neural network well known for its ability to model sequential data. We found in our tests that words with comparable meanings are typically found near each other in embedding spaces. The trials' findings demonstrate that our hybrid GRU model is capable of efficiently picking up word usage patterns from the provided training set. Note that the depth and breadth of the training data greatly influence the model's effectiveness. Our suggested method performs remarkably well when compared to other well-known recurrent algorithms such as RNN, MV-RNN, and LSTM on a single benchmark dataset. In comparison to the hybrid GRU's F-measure of 0.952, the proposed model's F-measure is 0.982. We compared the performance of the proposed method to that of the three most popular recurrent neural network designs at the moment (RNNs, MV-RNNs, and LSTMs) and found that the new method achieved better results on two benchmark datasets, both in terms of accuracy and error rate.</span></p>Poluru EswaraiahHussain Syed
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-132023-12-131010.4108/eetiot.4590A Study on the Performance of Deep Learning Models for Leaf Disease Detection
https://publications.eai.eu/index.php/IoT/article/view/4592
<p class="ICST-abstracttext"><span lang="EN-GB">The backbone of our Indian economy is agriculture. Plant diseases are a key contributor to substantial reductions in crop quality and quantity. Finding leaf diseases is a crucial job in the study of plant pathology, so deep learning models are essential for classification objectives with positive outcomes. Many different methods have been employed in recent years to classify plant diseases. This work has aided in identifying and categorizing plant leaf diseases. Images of Tomato, Potato, and Pepper plant leaves from the PlantVillage Database, which includes fifteen disease classifications, were used in this study. The pre-trained deep learning models InceptionV3, MobileNet, DenseNet121, Inception-ResNetV2, and ResNet152V2 are utilized to diagnose leaf diseases. The </span><span lang="EN-GB">deep learning models are trained to classify both healthy leaves and various sorts of leaf illnesses.</span></p>G SucharithaM SirishaK PravalikaK. Navya Gayathri
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-132023-12-131010.4108/eetiot.4592A Hybrid Approach for Mobile Phone Recommendation using Content-Based and Collaborative Filtering
https://publications.eai.eu/index.php/IoT/article/view/4594
<p>INTRODUCTION: The number of manufacturers and models accessible in the market has increased due to the growing trend of mobile phone use. Customers now have the difficult task of selecting a phone that both fits their demands and offers good value. Although recommendation algorithms already exist, they frequently overlook the various aspects that buyers take into account before making a phone purchase. Furthermore, recommendation systems are now widely used tools for leveraging huge data and customising suggestions according to user preferences.</p><p>OBJECTIVES: Machine learning techniques like content-based filtering and collaborative filtering have demonstrated promising outcomes among the different methodologies proposed for constructing these kinds of systems. A hybrid recommendation system that combines the benefits of collaborative filtering with content-based filtering is presented in this paper for mobile phone selection. The suggested method intends to deliver more precise and customised recommendations by utilising user behaviour patterns and mobile phone content properties.</p><p>METHODS: The system makes better recommendations by analysing user preferences and phone similarities using the aforementioned machine learning methods. The technology that has been built exhibits its capability to aid customers in making an informed mobile phone selection.</p><p>RESULTS: With the effective hybridization process, we obtained the best possible MSE, MAE, and RMSE scores.</p><p>CONCLUSION: To sum up, the growing intricacy of the mobile phone industry and the abundance of options have necessitated the creation of increasingly advanced recommendation systems. This work presents a hybrid recommendation system that efficiently blends collaborative and content-based filtering techniques to provide users with more tailored, superior recommendations. This approach can enable customers to choose the best mobile phone for their needs by taking into account both user behaviour and mobile phone characteristics.</p>B V ChandrahaasBhawani Sankar PanigrahiSagar Dhanraj PandeNirmal Keshari Swain
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-132023-12-131010.4108/eetiot.4594Enhancing Arabic E-Commerce Review Sentiment Analysis Using a hybrid Deep Learning Model and FastText word embedding
https://publications.eai.eu/index.php/IoT/article/view/4601
<p class="ICST-abstracttext"><span lang="EN-GB">Sentiment analysis (SA) is a prominent application of NLP; it extracts opinions from text. Arabic SA is challenging because of ambiguity, dialects, morphological variation, and the scarcity of available resources. The application of convolutional neural networks to Arabic SA has proven successful, and hybrid models improve on single deep learning models: by layering several deep learning components into ensembles, higher accuracy can be achieved than with earlier deep learning models. This research successfully predicted Arabic sentiment using CNN, LSTM, GRU, BiGRU, BiLSTM, CNN-BiGRU, CNN-GRU, CNN-LSTM, and CNN-BiLSTM models. Two large datasets, the HARD and BRAD datasets, are used to evaluate the effectiveness of the proposed model. The findings demonstrated that the proposed model could interpret the feelings conveyed in Arabic. The proposed procedure starts with the extraction of AraBERT model features. After that, we developed and trained nine deep learning models, namely CNN, LSTM, GRU, BiGRU, BiLSTM, CNN-BiGRU, CNN-GRU, CNN-LSTM, and CNN-BiLSTM, concatenating FastText and GloVe word embeddings. By a margin of 0.9112, our technique surpassed both standard forms of deep learning.</span></p>Nouri HichamHabbat NasseraSabri Karim
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-14 | 2023-12-14 | 10 | 10.4108/eetiot.4601
Identification and Categorization of Yellow Rust Infection in Wheat through Deep Learning Techniques
https://publications.eai.eu/index.php/IoT/article/view/4603
<p>The global wheat industry faces significant challenges due to yellow rust disease, caused by the fungus Puccinia striiformis, which leads to substantial crop losses and economic impacts. Timely detection and classification of the disease are essential for its effective management and control. In this study, we investigate the potential of DL and ML techniques for detecting and classifying yellow rust disease in wheat. We utilize three state-of-the-art CNN models, namely ResNet50, DenseNet121, and VGG19, to analyze wheat leaf images and extract relevant features. These models were developed and refined using a large dataset of annotated wheat photos encompassing both healthy plants and those affected by yellow rust disease. Furthermore, we examine the effectiveness of data augmentation and transfer learning in enhancing classification performance. Our findings reveal that the DL-based CNN models surpass traditional machine learning techniques in detecting and classifying yellow rust disease in wheat. Among the tested CNN models, EfficientNetB3 demonstrates the best performance, emphasizing its suitability for large-scale and real-time monitoring of wheat crops. This research contributes to the development of precision agriculture tools, laying the groundwork for prompt intervention and management of yellow rust disease, ultimately minimizing yield loss and economic impact on wheat production.</p>Mamatha Mandava, Surendra Reddy Vinta, Hritwik Ghosh, Irfan Sadiq Rahat
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-14 | 2023-12-14 | 10 | 10.4108/eetiot.4603
Sentence Fusion using Deep Learning
https://publications.eai.eu/index.php/IoT/article/view/4605
<p>The human process of document summarization involves summarizing a document by sentence fusion. Sentence fusion combines two or more sentences to create an abstract sentence, and it is useful for converting an extractive summary into an abstractive summary. An extractive summary contains a set of salient sentences selected from a single document or multiple related documents. Redundancy creates problems in an extractive summary because it contains sentences whose segments or phrases are redundant. Sentence fusion helps to remove this redundancy by fusing sentences into a single abstract sentence, moving an extractive summary towards an abstractive summary. In this paper, we present an approach that uses a deep learning model for sentence fusion, trained over a large dataset. We have tested our approach through both manual evaluation and system evaluation. The results show that our model fuses sentences effectively.</p>Sohini Roy Chowdhury, Kamal Sarkar
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-14 | 2023-12-14 | 10 | 10.4108/eetiot.4605
Prioritizing IoT-driven Sustainability Initiatives in Retail Chains: Exploring Case Studies and Industry Insights
https://publications.eai.eu/index.php/IoT/article/view/4628
<p>INTRODUCTION: Prioritizing sustainability initiatives is crucial for retail chains as they integrate Internet of Things (IoT) technologies to drive environmental responsibility. Retail chains bear responsibility for environmental stewardship as they expand globally in operations, supply chains, and offerings. By prioritizing these initiatives, retail chains can reduce environmental impacts and resource waste and mitigate the associated risks with the help of technologies such as IoT.</p><p>OBJECTIVES: This paper aims to explore how IoT can aid sustainable practices, mitigate risks, and enhance efficiency while addressing challenges, ultimately providing insights for retail chains to prioritize sustainability in the IoT context.</p><p>METHODS: The research employs a qualitative approach, focusing on in-depth case studies and analysis of industry reports and literature to explore IoT-driven sustainability initiatives in retail chains. It includes a diverse sample of retail chains, such as supermarkets and fashion retail, selected based on the availability of data on their use of IoT for sustainability. The study involves descriptive analysis to present an overview of these initiatives and competitive analysis to identify sustainability leaders and areas for improvement. However, limitations include potential data availability issues and reliance on publicly available sources, with findings reflecting data up to the 2018-2021 timeframe.</p><p>RESULTS: The results highlight significant sustainability benefits achieved through IoT integration in various retail chain types. Case studies, such as Sainsbury's and Coca-Cola, demonstrate waste reduction and sustainable practices. Examples from Nordstrom and 7-Eleven showcase energy efficiency improvements. The versatility of IoT technologies across supermarkets, department stores, and convenience stores emphasizes the transformative power of IoT in driving sustainability in the retail industry. The study proposes a prioritization approach, considering key metrics and leveraging frameworks such as the Triple Bottom Line, Life Cycle Sustainability Assessment, and the Sustainability Framework for effective decision-making and goal alignment in IoT-driven sustainability initiatives.</p><p>CONCLUSION: In conclusion, this paper highlights the substantial potential of prioritizing IoT-driven sustainability initiatives in retail chains for positive environmental, social, and economic outcomes. Through case studies, the diverse applications of IoT, such as food waste reduction and energy-efficient lighting, demonstrate tangible benefits. The trend towards sustainable sourcing and materials is evident across various retail chain types. The discussion underscores the need for a systematic approach, utilizing frameworks like the Triple Bottom Line, to align with strategic objectives and optimize resources.</p>Krishnan Siva Karthikeyan, T. Nagaprakash
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-18 | 2023-12-18 | 10 | 10.4108/eetiot.4628
Secured and Sustainable Supply Chain Management Using Key Escrow Encryption Technique in Blockchain Mechanism
https://publications.eai.eu/index.php/IoT/article/view/4630
<p>INTRODUCTION: Supply chain management is the process of managing the flow of goods and services, together with the related financial functions, from the procurement of raw materials through delivery to the final destination.</p><p>OBJECTIVES: Since the traditional supply chain process lacks data visibility, trustworthiness, and a distributed ledger, a blockchain mechanism has been introduced and integrated to ensure time-stamped transactions and provide a secured supply chain process.</p><p>METHODS: The distributed nature of the blockchain helps in organizing the supply chain and engaging customers with real, verifiable, and immutable data. Blockchain technology enables these transactions to be tracked in a very secure and transparent manner. In this paper, we therefore propose a framework that utilizes blockchain and key Escrow encryption systems to optimize the security of supply chains and improve services for global business survivability.</p><p>RESULTS: A comparative analysis against existing benchmark techniques with respect to key size, key generation time, and key distribution time was carried out, and the proposed model was found to provide better results.</p><p>CONCLUSION: The proposed system can track the authenticity of a product and details about its manufacturer. The paper thus concludes that the proposed work enhances data integrity, traceability, and availability, and that single-point failures can be resolved or reduced using the blockchain mechanism.</p>A. Anitha, M. Priya, M. K. Nallakaruppan, Deepa Natesan, C. N. Kushagra Jaiswal, Harsh Kr Srivastava
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-18 | 2023-12-18 | 10 | 10.4108/eetiot.4630
Transformative Metamorphosis in Context to IoT in Education 4.0
https://publications.eai.eu/index.php/IoT/article/view/4636
<p class="ICST-abstracttext"><span lang="EN-GB" style="font-size: 9.0pt;">In the modern technology-driven era, it is important to consider a new model of education to keep pace with Industry 4.0. In view of this, the present paper critically explores the issues, discusses potential solutions along with a comprehensive analysis of the applications of technologies such as the Internet of Things (IoT) in modern education, especially Education 4.0, and examines the potential of these technologies to transform the education sector. The challenges faced by previous education models are analysed, along with how they pave the way for the inclusion of IoT in education, leading to Education 4.0. The potential benefits of IoT in improving learning outcomes, enhancing student engagement and retention, and supporting teachers are also highlighted. In addition, the paper addresses the ethical and privacy concerns associated with the use of these technologies and suggests areas for future research.</span></p>Ashish Kumar Biswal, Divya Avtaran, Vandana Sharma, Veena Grover, Sushruta Mishra, Ahmed Alkhayyat
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-19 | 2023-12-19 | 10 | 10.4108/eetiot.4636
Artificial Intelligence Shaping Talent Intelligence and Talent Acquisition for Smart Employee Management
https://publications.eai.eu/index.php/IoT/article/view/4642
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Due to the increased importance of artificial intelligence (AI) in talent intelligence (TI), and of TI in talent acquisition (TA), this paper shows how AI improves the TI and, consequently, the TA process of an organization.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The objectives of this paper are to understand the evolution of AI-driven TI concepts and explore the role of AI in TI.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: Primary and secondary data were used for research, and analysis was performed using SPSS. Primary data were collected through a survey of 20 HR managers from 20 companies in Delhi-NCR. These 20 managers were selected through a random sampling method, and data on the role of AI in TI and TA were collected from them using a questionnaire. Secondary data from a literature review were used to explore TI.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The paper not only brings out the role of AI in TI, but also elaborates the TI concept. From the survey data of HR managers and the secondary data, it is understood that AI contributes to the TI of an organization, which helps in making TA more effective and efficient.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: AI enables better TI, which improves the TA processes of an organization. Thus, AI contributes to the TA of an organization.</span></p>Alka Agnihotri, K. H. Pavitra, Balamurugan Balusamy, Alka Maurya, Pratyush Bibhakar
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-19 | 2023-12-19 | 10 | 10.4108/eetiot.4642
Empowering Employee Wellness and Building Resilience in Demanding Work Settings Through Predictive Analytics
https://publications.eai.eu/index.php/IoT/article/view/4644
<p class="ICST-abstracttext"><span lang="EN-GB" style="font-size: 9.0pt;">In today's fast-paced and competitive corporate landscape, the well-being of employees is paramount for sustained success. This paper explores the transformative potential of predictive analytics in cultivating a healthier, more resilient workforce within high-pressure work environments. The title "Empowering Employee Wellness and Building Resilience in Demanding Work Settings Through Predictive Analytics" encapsulates our objective of harnessing data-driven insights to mitigate the negative effects of high-pressure work settings and foster an environment where employees thrive. Through an in-depth examination of predictive analytics tools and methodologies, this study offers a roadmap for organizations to proactively identify stressors, predict burnout risks, and implement targeted interventions. By collecting and analysing relevant data, employers can tailor support systems, optimize workloads, and implement mindfulness programs that enhance employee well-being. Moreover, organizations can better adapt to change, maintain workforce continuity, and drive productivity by fostering resilience through predictive insights. This research bridges the gap between data science and human resources, offering a holistic approach to employee wellness and resilience-building. By leveraging predictive analytics, companies can create a culture of care where employees feel supported, empowered, and more capable of surviving and thriving in high-pressure work environments.</span></p>Srishti Dikshit, Yashika Grover, Pragati Shukla, Akhil Mishra, Yash Sahu, Chandan Kumar, Muskan Gupta
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-19 | 2023-12-19 | 10 | 10.4108/eetiot.4644
Merging Minds and Machines: The Role of Advancing AI in Robotics
https://publications.eai.eu/index.php/IoT/article/view/4658
<p>The relentless pursuit of creating intelligent robotic systems has led to a symbiotic relationship between human inventiveness and artificial intelligence (AI). Artificial intelligence is the development of computer systems able to perform tasks that would otherwise require human intelligence. This abstract explores the pivotal role that AI plays in advancing the capabilities and applications of robotic systems. The integration of AI algorithms and machine learning techniques has launched robotics beyond mere automation, enabling machines to adapt, learn, and interact with the world in ways previously deemed science fiction. Design fictions that vividly imagine future scenarios of AI or robotics in use offer a means both to explain and to query the technological possibilities. Examples of these tasks are visual perception, speech recognition, decision-making, and translation between languages. The three key dimensions of AI's role in robotics are cognitive augmentation, human-robot collaboration, and autonomous intelligence. The abstract also discusses the societal implications of this AI-driven advancement in robotic systems, including ethical considerations, job market impacts, and the democratization of access to advanced technology. The convergence of human intellect and artificial intelligence in robotics marks a transformative era in which machines become not just tools, but companions, collaborators, and cognitive extensions of human capabilities. Researchers are taking inspiration from the brain and considering alternative architectures in which networks of artificial neurons and synapses process information with high speed and adaptive learning capabilities in an energy-efficient, scalable manner. The indispensable role of AI in shaping the future of robotic systems and bridging the gap between human potential and machine capabilities is highlighted. The impact of this synergy reverberates across industries, promising a world where robots become not just mechanical contraptions but intelligent partners in our journey of progress.</p>Nishtha Prakash, Areeba Atiq, Mohammad Shahid, Jyoti Rani, Srishti Dikshit
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-20 | 2023-12-20 | 10 | 10.4108/eetiot.4658
Employee Attrition: Analysis of Data Driven Models
https://publications.eai.eu/index.php/IoT/article/view/4762
<p class="ICST-abstracttext"><span lang="EN-GB">Companies constantly strive to retain their professional employees to minimize the expenses associated with recruiting and training new staff members. Accurately anticipating whether a particular employee is likely to leave or remain with the company can empower the organization to take proactive measures. Unlike physical systems, human resource challenges cannot be encapsulated by precise scientific or analytical formulas. Consequently, machine learning techniques emerge as the most effective tools for addressing this objective. In this paper, we present a comprehensive approach for predicting employee attrition using machine learning, ensemble techniques, and deep learning, applied to the IBM Watson dataset. We employed a diverse set of classifiers on the dataset, including Logistic Regression, K-nearest neighbour (KNN), Decision Tree, Naïve Bayes, Gradient Boosting, AdaBoost, Random Forest, Stacking, XGBoost, FNN (Feedforward Neural Network), and CNN (Convolutional Neural Network). Our most successful model, which harnesses a deep learning technique known as FNN, achieved superior predictive performance, with the highest accuracy, recall, and F1-score of 97.5%, 83.93%, and 91.26%, respectively.</span></p>Manju Nandal, Veena Grover, Divya Sahu, Mahima Dogra
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-03 | 2024-01-03 | 10 | 10.4108/eetiot.4762
Proposed hybrid Model in Online Education
https://publications.eai.eu/index.php/IoT/article/view/4770
<p class="ICST-abstracttext" style="margin-left: 0in;"><span lang="EN-GB">The advancement of technology powering e-learning has brought numerous benefits, including consistency, scalability, cost reduction, and improved usability. However, there are also challenges that need to be addressed, and this paper considers key avenues for enhancing the technology powering e-learning. Artificial intelligence has revolutionized the field of e-learning and created tremendous opportunities for education; storage, servers, software systems, databases, online management systems, and apps are examples of the resources involved. This paper aims to forecast students' adaptability to online education using predictive machine learning (ML) models, including Logistic Regression, Decision Tree, Random Forest, AdaBoost, and ANN. The dataset utilized for this study was sourced from Kaggle and comprised 1205 high school to college students. The research encompasses several stages of data analysis, including data preprocessing, model training, testing, and validation. Multiple performance metrics, such as accuracy, specificity, sensitivity, F1 score, and precision, were employed to assess the effectiveness of each model. The findings demonstrate that all five models exhibit considerable predictive capabilities. Notably, the decision tree and hybrid models outperformed the others, achieving an impressive accuracy rate of 92%. Consequently, it is recommended to utilize these two models, RF and XGB, for predicting students' adaptability levels in online education due to their superior predictive accuracy. Additionally, the Logistic Regression, KNN, AdaBoost, and ANN models also yielded respectable performance, achieving accuracy rates of 77.48%, 83.77%, 74.17%, and 91.06%, respectively. In summary, this study underscores the superiority of the RF and XGB models in delivering higher prediction accuracy, aligning with similar research endeavours employing ML techniques to forecast adaptability levels.</span></p>Veena Grover, Manju Nandal, Balamurugan Balusamy, Divya Sahu, Mahima Dogra
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-04 | 2024-01-04 | 10 | 10.4108/eetiot.4770
SMART REPELLER: A Smart system to prevent Rhesus Macaque Trespassing in Human Settlements and Agricultural Areas
https://publications.eai.eu/index.php/IoT/article/view/4809
<p class="ICST-abstracttext"><span lang="EN-GB">Rhesus macaque trespassing is a widespread problem in which wild Rhesus macaque monkeys enter human settlements and agricultural areas, causing issues such as property damage, food theft, and health risks to humans. These primates also cause significant economic losses by raiding crops, damaging plants, and disrupting the natural balance of the ecosystem. To address this problem, this paper proposes a technology-based solution called Smart Repeller, which uses ultrasonic sound waves and a calcium carbide cannon, along with computer vision technology and artificial intelligence, to detect the presence of monkeys and activate repelling mechanisms automatically. The proposed device eliminates the need for human intervention, making it efficient and cost-effective. Our paper aims to demonstrate the feasibility and effectiveness of the proposed device through experimental studies and simulations, with the ultimate goal of providing a practical and scalable solution to mitigate the problem of Rhesus macaque trespassing in human settlements and agricultural areas.</span></p>Radha R, Balaji G, Anita X, Mridhula N
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-10 | 2024-01-10 | 10 | 10.4108/eetiot.4809
Development and Simulation Two Wireless Hosts Communication Network Using Omnnet++
https://publications.eai.eu/index.php/IoT/article/view/4812
<p class="ICST-abstracttext"><span lang="EN-GB">A wireless network is a collection of computers and other electronic devices that exchange information by means of radio waves. Endpoint computing devices can all be connected without the need for hardwired data cabling thanks to the prevalence of wireless networks in today's businesses and networks. The aim of this paper is to design and construct a wireless network model connecting two hosts, implemented to simulate wireless communications. In the simulation, one of the hosts wirelessly sends User Datagram Protocol (UDP) data to the other. Additionally, the protocol models, including both the physical layer and the lower layers, were kept as simple as possible. The architecture and functionality of the new simulator demonstrate its ability to handle host mobility; in particular, when a host moves out of range, the simulation ends.</span></p>Morched Derbali
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-10 | 2024-01-10 | 10 | 10.4108/eetiot.4812
Crop Growth Prediction using Ensemble KNN-LR Model
https://publications.eai.eu/index.php/IoT/article/view/4814
<p>Research in agriculture is expanding. Agriculture relies heavily on earth and environmental factors, such as temperature, humidity, and rainfall, to forecast crops. Crop prediction is a crucial problem in agriculture, and machine learning is an emerging research area in this field. Every grower is curious to know how much of a harvest to anticipate. In the past, producers had control over the selection of the product to be grown, the monitoring of its development, and the timing of its harvest. Today, however, the agricultural community finds it challenging to carry on because of sudden shifts in the climate. As a result, machine learning techniques have increasingly replaced traditional prediction methods, and these techniques have been employed in this research to determine crop production. It is critical to use effective feature selection techniques to transform the raw data into a machine-learning-compatible dataset in order to guarantee that a particular machine learning (ML) model operates with a high degree of accuracy. The accuracy of the model will increase by reducing redundant data and using only data characteristics that are highly pertinent in determining the model's final output. To guarantee that only the most important characteristics are included in the model, optimal feature selection is necessary. Our model would become overly complex if we combined every characteristic from the raw data without first examining its role in the model-building process. Additionally, the time and space complexity of the machine learning model will grow with the inclusion of new characteristics that have little impact on the model's performance. The findings show that, compared to the current classification method, an ensemble technique provides higher prediction accuracy.</p>Attaluri Harshitha, Beebi Naseeba, Narendra Kumar Rao, Abbaraju Sai Sathwik, Nagendra Panini Challa
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-10 | 2024-01-10 | 10 | 10.4108/eetiot.4814
Exploring the Impact of Software as a Service (SaaS) on Human Life
https://publications.eai.eu/index.php/IoT/article/view/4821
<p class="ICST-abstracttext"><span lang="EN-GB">Software as a Service (SaaS) has emerged as a pivotal aspect of modern business operations, fundamentally transforming how companies utilize IT resources and impacting firm performance. This research delves into the profound effects of SaaS on human life within the business sphere, focusing on its value proposition and methodologies for assessing its worth. The primary objectives of this paper are twofold: first, to evaluate the actual value of SaaS business applications concerning their purported benefits, particularly in terms of IT resource management and firm performance; second, to explore the means of quantifying the worth of SaaS business applications within organizational frameworks.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">This study utilizes techniques derived from social network analysis to investigate the impact of SaaS on human life in business. A comprehensive review of literature from various sources including papers, articles, newspapers, and books forms the basis for this exploratory research. Both primary and secondary data are employed to elucidate the multifaceted implications of SaaS adoption.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">The findings of this research underscore the profound influence of SaaS on a company's cost structure, return on IT investments, and digitalization of services. Cloud computing emerges as a cornerstone for the seamless integration of SaaS into daily business operations, offering expanded market opportunities and increased revenue streams. In conclusion, SaaS represents a transformative force in modern business landscapes, reshaping human interactions with technology, optimizing operational efficiency, and mitigating costs. Cloud-based SaaS models hold substantial promise for enhancing business agility and facilitating growth across diverse markets.</span></p>Mukul Gupta, Deepa Gupta, Priti Rai
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-11 | 2024-01-11 | 10 | 10.4108/eetiot.4821
IoT-Based Shoe for Enhanced Mobility and Safety of Visually Impaired Individuals
https://publications.eai.eu/index.php/IoT/article/view/4823
<p class="ICST-abstracttext"><span lang="EN-GB">This research paper presents the design, development, and evaluation of an Internet of Things (IoT)-based shoe system to enhance the mobility and safety of visually impaired individuals. The proposed shoe leverages IoT technologies, embedded sensors, and wireless communication to provide real-time information and assistance to blind individuals during their daily activities. The system encompasses a wearable shoe device equipped with sensors, a microcontroller unit, and a companion mobile application that relays important data and alerts the user. The effectiveness of the IoT-based shoe is evaluated through a series of user tests and feedback surveys. The results demonstrate the potential of this innovative solution to empower blind individuals, improve their independence, and promote a safer environment for their navigation.</span></p>Bakshish Singh, Pongkit Ekvitayavetchanuku, Bharti Shah, Neeraj Sirohi, Prachi Pundhir
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-11 | 2024-01-11 | 10 | 10.4108/eetiot.4823
Enhancing Agricultural Sustainability with Deep Learning: A Case Study of Cauliflower Disease Classification
https://publications.eai.eu/index.php/IoT/article/view/4834
<p>The pivotal role of sustainable agriculture in ensuring food security and nurturing healthy farming communities is undeniable. Among the numerous challenges encountered in this domain, one key hurdle is the early detection and effective treatment of diseases impacting crops, specifically cauliflower. This research provides an in-depth exploration of the use of advanced DL algorithms to perform efficient identification and classification of cauliflower diseases. The study employed and scrutinized four leading DL models: EfficientNetB3, DenseNet121, VGG19 CNN, and ResNet50, assessing their capabilities based on the accuracy of disease detection. The investigation revealed a standout performer, the EfficientNetB3 model, which demonstrated an exceptional accuracy rate of 98%. The remaining models also displayed commendable performance, with DenseNet121 and VGG19 CNN attaining accuracy rates of 81% and 84%, respectively, while ResNet50 trailed at 78%. The noteworthy performance of the EfficientNetB3 model is indicative of its vast potential to contribute to agricultural sustainability. Its ability to detect and classify cauliflower diseases accurately and promptly allows for early interventions, reducing the risk of extensive crop damage. This study contributes valuable insights to the expanding field of DL applications in agriculture. These findings are expected to guide the development of advanced agricultural monitoring systems and decision-support tools, ultimately fostering a more sustainable and productive agricultural landscape.</p>Nihar Ranjan Pradhan, Hritwik Ghosh, Irfan Sadiq Rahat, Janjhyam Venkata Naga Ramesh, Mannava Yesubabu
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-12 | 2024-01-12 | 10 | 10.4108/eetiot.4834
Internet of Things and Health: A literature review based on Mixed Method
https://publications.eai.eu/index.php/IoT/article/view/4909
<p class="ICST-abstracttext"><span lang="EN-GB">The integration of technological advances into the health sciences has promoted their development, but has also generated setbacks and difficulties for digital transformation. In different areas, technology has modified the processes of diagnosis, teaching and learning, treatment, and monitoring, which is why the study of new technologies and the models that support their introduction is essential. The Internet of Things is one of these models, which, in turn, includes different models, devices, and applications. Owing to its breadth of exploitation options and benefits, this concept has been adopted in the health area and particularized as the Internet of Medical Things. To approximate the main trends and characteristics, a literature review study based on mixed methods was conducted. Two studies were carried out with a sequential strategy, the first being bibliometric and the second a scoping review. The main results describe the principal trends in terms of bibliometric indicators and provide a thematic analysis of areas, populations, benefits, and limitations. It is concluded that new interdisciplinary studies are needed, and lines for future research are presented.</span></p>Ana Maria Chaves Cano, Verenice Sánchez Castillo, Alfredo Javier Pérez Gamboa, William Castillo-Gonzalez, Adrián Alejandro Vitón-Castillo, Javier Gonzalez-Argote
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-19 2024-01-19 10 10.4108/eetiot.4909
An Accurate Plant Disease Detection Technique Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/4963
<p>INTRODUCTION: Plant diseases pose a significant threat to agriculture, causing substantial crop and financial losses. Modern technologies enable precise monitoring of plant health and early disease identification. Employing image processing, particularly Convolutional Neural Network (CNN) techniques, allows accurate prediction of plant diseases. The aim is to provide an automated, reliable disease detection system, aiding professionals and farmers in timely action to prevent infections and reduce crop losses. Integrating cutting-edge technologies in agriculture holds vast potential to enhance profitability and production.</p><p>OBJECTIVES: The primary focus lies in developing an automated system proficient in analysing plant images to detect disease symptoms and classify plants as healthy or disease-affected. The system aims to simplify plant disease diagnostics for farmers, providing essential information about leaf name, integrity, and life span.</p><p>METHODS: The method aims to empower farmers by enabling easy identification of plant diseases, providing essential details like disease name, accuracy level, and life span. The CNN model accurately gauges the system's accuracy level. It further streamlines the process by offering a unified solution through a user-friendly web application, eliminating the need for separate interventions for affected leaves. The system saves farmers time by delivering crucial information directly.</p><p>RESULTS: The proposed web application proves to be a comprehensive solution, eliminating the need for farmers to search for separate interventions for affected leaves. The machine learning model exhibits a noteworthy accuracy of 96.67%, emphasizing its proficiency in making correct predictions for the given task.</p><p>CONCLUSION: In conclusion, the paper successfully employed a CNN algorithm for precise plant disease prediction. With the proposed model deployment, farmers can easily access information about plant diseases, their life span, and preventive measures through the web application. By detecting illnesses early, farmers can promptly take remedial actions to mitigate sicknesses and minimize crop losses. The integrated approach holds promise for advancing agricultural practices and ensuring sustainable crop management.</p>Sai Sharvesh R, Suresh Kumar K, C. J. Raman
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-29 2024-01-29 10 10.4108/eetiot.4963
An IoT Integrated Smart Prediction of Wild Animal Intrusion in Residential Areas Using Hybrid Deep Learning with Computer Vision
https://publications.eai.eu/index.php/IoT/article/view/4976
<p>INTRODUCTION: The conversion of forests into human lands causes the intrusion of wild animals into residential areas. There is a necessity to prevent the intrusion of such wild animals, which cause damage to properties and harm or kill humans. Human population growth leads to an increase in the exploitation of forest areas and related resources for residential and other settlement purposes. There is a need for a system to detect the entry of such animals into habitats.</p><p>OBJECTIVES: This paper proposes a system to detect and restrict the intrusion of wild animals into residential areas caused by the conversion of forests into human lands.</p><p>METHODS: Deep learning technology combined with Internet of Things (IoT) devices can be deployed in the process of restricting the entry of wild animals into residential areas. The proposed system uses deep learning techniques with various algorithms like DenseNet 201, ResNet50 and You Only Look Once (YOLO). These deep-learning algorithms predict wild animals through image classification. This is done using IoT devices placed in such areas. The role of the IoT devices is to transmit the computer vision images to the deep learning module, receive the output, and alarm the residents of the area.</p><p>RESULTS: The main results concern the implementation of animal prediction through image processing. The datasets used for prediction and classification rely on cloud modules, which store the dataset for the prediction process and transfer it whenever needed. As the proposed system is a hybrid model that uses more than one algorithm, the accuracies obtained from the prediction for the DenseNet 201, ResNet50 and You Only Look Once (YOLO) algorithms are 82%, 92%, and 98%, respectively.</p><p>CONCLUSION: The prediction of these animals is done by a deep learning model comprising three algorithms: DenseNet 201, ResNet50 and YOLOv3. Among them, the algorithm with the highest accuracy is considered the most efficient and accurate.</p>Senthil G., A.R Prabha, N Aishwarya, R M Asha, S Prabu
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-30 2024-01-30 10 10.4108/eetiot.4976
Assessment of Zero-Day Vulnerability using Machine Learning Approach
https://publications.eai.eu/index.php/IoT/article/view/4978
<p>Organisations and people are seriously threatened by zero-day vulnerabilities because they may be utilised by attackers to infiltrate systems and steal private data. Currently, Machine Learning (ML) techniques are crucial for finding zero-day vulnerabilities since they can analyse huge datasets and find patterns that can point to a vulnerability. This research's goal is to provide a reliable technique for detecting intruders and zero-day vulnerabilities in software systems. The suggested method employs a Deep Learning (DL) model and an auto-encoder model to find unusual data patterns. Additionally, a model for outlier detection that contrasts the auto-encoder model with the one-class Support Vector Machine (SVM) technique will be developed. A dataset of known vulnerabilities and intrusion attempts will be used to train and assess the models.</p>SakthiMurugan S, Sanjay Kumaar A, Vishnu Vignesh, Santhi P
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-30 2024-01-30 10 10.4108/eetiot.4978
Consolidation Coefficient of Soil Prediction by Using Teaching Learning based Optimization with Fuzzy Neural Network
https://publications.eai.eu/index.php/IoT/article/view/4990
<p>A key factor in constructing buildings on soft soil is the consolidation coefficient of the soil, referred to as Cv. It is a crucial lab-measured engineering parameter utilized during the design and verification of geotechnical structures. Nevertheless, the laboratory experiments take a lot of time and money. In this study, Cv is projected using a Fuzzy Neural Network (FNN) with feature selection optimized by Teaching Learning-based Optimization (TLO), which has enhanced the quality of the prediction model by removing unnecessary characteristics and relying solely on crucial ones. The experimental results demonstrate that the projected FNN has the highest predictive validity for the prediction of Cv (Root Mean Squared Error (RMSE) = 0.379, Mean Absolute Error (MAE) = 0.26, and coefficient of determination r = 0.835), followed by the Multi-Layer Perceptron neural network (MLP), BBO, Support Vector Regression (SVR), and the back-propagation multi-layer perceptron (Bp-MLP) neural nets. Hence, it can be said that even if all the models used perform well in predicting the soil consolidation coefficient, the FNN-TLO performs the best.</p>K Kalaivani, D Mohana Priya, K Veena, K Brindha, K Karuppasamy, K R Shanmugapriyaa
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-31 2024-01-31 10 10.4108/eetiot.4990
I-CVSSDM: IoT Enabled Computer Vision Safety System for Disaster Management
https://publications.eai.eu/index.php/IoT/article/view/5046
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Around the world, individuals experience flooding more frequently than any other natural calamity.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The motivation behind this research is to provide an Internet of Things (IoT)-based early warning assistive system to enable monitoring of water logging levels in flood-affected areas. Further, the SSD-MobiNET V2 model is used in the developed system to detect and classify the objects that prevail in the flood zone.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: The developed research is validated in a real-time scenario. To enable this, a customized embedded module is designed and developed using the Raspberry Pi 4 model B processor. The module uses (i) a pi-camera to capture the objects and (ii) an ultrasonic sensor to measure the water level in the flood area.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The measured data and detected objects are periodically ported to the cloud and stored in the cloud database to enable remote monitoring and further processing.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: Also, whenever the level of waterlogged exceeds the threshold, an alert is sent to the concerned authorities in the form of an SMS, a phone call, or an email.</span></p>Parameswaran RameshVidhya NPanjavarnam BShabana Parveen MDeepak Athipan A M BBhuvaneswari P T V
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-06 2024-02-06 10 10.4108/eetiot.5046
Machine Learning based Disease and Pest detection in Agricultural Crops
https://publications.eai.eu/index.php/IoT/article/view/5049
<p>INTRODUCTION: Most Indians rely on agricultural work as their primary means of support, making it an essential part of the country's economy. Disasters and the expected loss of farmland by 2050 as a result of global population expansion raise concerns about food security in that year and beyond. The Internet of Things (IoT), Big Data and Analytics are all examples of smart agricultural technologies that can help farmers enhance their operations and make better decisions.</p><p>OBJECTIVES: In this paper, a machine learning-based system has been developed for solving the problem of crop disease and pest prediction, focusing on the chilli crop as a case study.</p><p>METHODS: The performance of the suggested system has been assessed by employing performance metrics like accuracy, Mean Squared Error (MSE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE).</p><p>RESULTS: The experimental results reveal that the proposed method obtained an accuracy of 0.90, MSE of 0.37, MAE of 0.15 and RMSE of 0.61.</p><p>CONCLUSION: This model will predict pests and diseases and notify farmers using a combination of the Random Forest Classifier, the AdaBoost Classifier, K Nearest Neighbour, and Logistic Regression. Random Forest is the most accurate model.</p>Balasubramaniam S, Sandra Grace Nelson, Arishma M, Anjali S Rajan, Satheesh Kumar K
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-06 2024-02-06 10 10.4108/eetiot.5049
Traffic sign recognition using CNN and Res-Net
https://publications.eai.eu/index.php/IoT/article/view/5098
<p>In the realm of contemporary applications and everyday life, the significance of object recognition and classification cannot be overstated. A multitude of valuable domains, including G-lens technology, cancer prediction, Optical Character Recognition (OCR), Face Recognition, and more, heavily rely on the efficacy of image identification algorithms. Among these, Convolutional Neural Networks (CNN) have emerged as a cutting-edge technique that excels in its aptitude for feature extraction, offering pragmatic solutions to a diverse array of object recognition challenges. CNN's notable strength is underscored by its swifter execution, rendering it particularly advantageous for real-time processing. The domain of traffic sign recognition holds profound importance, especially in the development of practical applications like autonomous driving for vehicles such as Tesla, as well as in the realm of traffic surveillance. In this research endeavour, the focus was directed towards the Belgium Traffic Signs Dataset (BTS), an encompassing repository comprising a total of 62 distinct traffic signs. By employing a CNN model, a meticulously methodical approach was followed, commencing with a rigorous phase of data pre-processing. This preparatory stage was complemented by the strategic incorporation of residual blocks during model training, thereby enhancing the network's ability to glean intricate features from traffic sign images. Notably, our proposed methodology yielded a commendable accuracy rate of 94.25%, demonstrating the system's robust and proficient recognition capabilities. The distinctive prowess of our methodology shines through its substantial improvements in specific parameters compared to pre-existing techniques. Our approach thrives in terms of accuracy, capitalizing on CNN's rapid execution speed, and offering an efficient means of feature extraction. By effectively training on a diverse dataset encompassing 62 varied traffic signs, our model showcases a promising potential for real-world applications. The overarching analysis highlights the efficacy of our proposed technique, reaffirming its potency in achieving precise traffic sign recognition and positioning it as a viable solution for real-time scenarios and autonomous systems.</p>J Cruz Antony, G M Karpura Dheepan, Veena K, Vellanki Vikas, Vuppala Satyamitra
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-12 2024-02-12 10 10.4108/eetiot.5098
An internet of things based smart agriculture monitoring system using convolution neural network algorithm
https://publications.eai.eu/index.php/IoT/article/view/5105
<p>Farming is a crucial vocation for survival on this planet because it meets the majority of people's necessities to live. However, as technology developed and the Internet of Things was created, automation (smarter technologies) began to replace old approaches, leading to broad improvement in all fields. The world is currently in an automated condition where newer, smarter technologies are upgraded daily throughout a wide range of industries, including smart homes, waste management, automobiles, industries, farming, health, grids, and more. Farmers suffer significant losses as a result of the regular crop destruction caused by local animals like buffaloes, cows, goats, elephants, and others. To protect their fields, farmers have been using animal traps or electric fences, and countless animals and humans perish as a result. Many individuals are giving up farming because of the serious harm that animals inflict on crops. The systems now in use make it challenging to identify the animal species. Consequently, animal detection is made simple and effective by employing an Artificial Intelligence-based Convolution Neural Network method. The concept of playing animal-specific sounds is by far the most accurate execution. Rotating cameras are put to good use. The percentage of animals detected by this technique has grown from 55% to 79%.</p>Balamurugan K S, Chinmaya Kumar Pradhan, Venkateswarlu A N, Harini G, Geetha P
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-13 2024-02-13 10 10.4108/eetiot.5105
Circumventing Stragglers and Staleness in Distributed CNN using LSTM
https://publications.eai.eu/index.php/IoT/article/view/5119
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Using neural networks for these inherently distributed applications is challenging and time-consuming. There is a crucial need for a framework that supports a distributed deep neural network to yield accurate results at an accelerated time.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: In the proposed framework, any experienced novice user can utilize and execute the neural network models in a distributed manner with the automated hyperparameter tuning feature. In addition, the proposed framework is provided in AWS Sage maker for scaling the distribution and achieving exascale FLOPS. We benchmarked the framework performance by applying it to a medical dataset. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The maximum performance is achieved with a speedup of 6.59 in 5 nodes. The model encourages expert/ novice neural network users to apply neural network models in the distributed platform and get enhanced results with accelerated training time. There has been a lot of research on how to improve the training time of Convolutional Neural Networks (CNNs) using distributed models, with a particular emphasis on automating the hyperparameter tweaking process. The study shows that training times may be decreased across the board by not just manually tweaking hyperparameters, but also by using L2 regularization, a dropout layer, and ConvLSTM for automatic hyperparameter modification.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: The proposed method improved the training speed for model-parallel setups by 1.4% and increased the speed for parallel data by 2.206%. Data-parallel execution achieved a high accuracy of 93.3825%, whereas model-parallel execution achieved a top accuracy of 89.59%.</span></p>Aswathy RavikumarHarini SriramanSaddikuti LokeshJitendra Sai
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-14 2024-02-14 10 10.4108/eetiot.5119
Deep Learning-Based Traffic Accident Prediction: An Investigative Study for Enhanced Road Safety
https://publications.eai.eu/index.php/IoT/article/view/5166
<p>INTRODUCTION: Traffic accidents cause enormous loss of life as well as property, which is a global concern. Effective accident prediction is essential for raising road safety and reducing the effects of accidents. To increase traffic safety, a deep learning-based technique for predicting accidents was developed in this research study.</p><p>OBJECTIVES: It gathers a large amount of data on elements including weather, road features, volume of traffic, and past accident reports. The dataset goes through pre-processing, such as normalization, to ensure that the scales of the input characteristics are uniform. Normalizing the gathered dataset ensures consistent scaling for the input features during the data processing step. This process enables efficient model training and precise forecasting. In order to track and examine the movement patterns of automobiles, people, and other relevant entities, real-time tracking and monitoring technologies, such as the Deep SORT algorithm, are also employed.</p><p>METHODS: The model develops a thorough grasp of the traffic situation by incorporating this tracking data with the dataset. Convolutional Neural Networks (CNN), in particular, are utilized in this research for feature extraction and prediction. CNNs capture crucial road characteristics by extracting spatial features from images or spatial data. With its insights into improved road safety, this study advances the prediction of traffic accidents.</p><p>RESULTS: A safer transport infrastructure could result from the developed deep learning-based strategy, which has the potential to enable pre-emptive interventions, enhance traffic management, and eventually reduce the frequency and severity of traffic accidents.</p><p>CONCLUSION: The proposed CNN demonstrates superior accuracy when compared to the existing method.</p>Girija M, Divya V
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-21 2024-02-21 10 10.4108/eetiot.5166
Gaming using different hand gestures using artificial neural network
https://publications.eai.eu/index.php/IoT/article/view/5169
<p>INTRODUCTION: Gaming has evolved over the years, and one of the exciting developments in the industry is the integration of hand gesture recognition.</p><p>OBJECTIVES: This paper proposes gaming using different hand gestures with Artificial Neural Networks, which allows players to interact with games using natural hand movements, providing a more immersive and intuitive gaming experience.</p><p>METHODS: The system introduces two modules: gesture recognition and gesture analysis. The gesture recognition module identifies the gestures, and the analysis module assesses them to execute game controls based on the calculated analysis.</p><p>RESULTS: The main results obtained in this paper are enhanced accessibility, higher accuracy and improved performance.</p><p>CONCLUSION: To communicate with any of the traditional systems, physical contact is necessary. In the hand gesture recognition system, the same functionality can be achieved through gestures without requiring physical contact with the interfaced devices.</p>Prema S, G Deena, Hemalatha D, Aruna K B, Hashini S
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-21 2024-02-21 10 10.4108/eetiot.5169
Preventing Double Spending Attacks through Crow Search Algorithm to Enhance E-Voting System Security
https://publications.eai.eu/index.php/IoT/article/view/5208
<p>An electronic voting system handles the polling and counting of votes. Although voting may now be done electronically in most countries, several difficulties remain, including the expense of paper, how ballots are organized, the possibility of varying results when tallying the votes, and others. Duplicate votes pose a significant concern as they can be fraudulently cast by individuals. To address this issue, Distributed Ledger Technology (DLT) is employed to enhance the voting procedure in a secured manner. A directed acyclic graph is used by the Internet of Things Application (IOTA), a promising distributed ledger system. Faster transaction confirmation, high scalability and zero transaction fees are achieved via the directed acyclic graph structure. In both the IOTA Tangle and blockchain technology, unauthorized users can create duplicate votes. This issue is the focus of the proposed method: the double-spending problem is solved by using the Crow Search Algorithm (CSA). This optimization approach produces an improved result for resolving double spending in e-voting systems.</p>S Muthulakshmi, A Kannammal
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-26 2024-02-26 10 10.4108/eetiot.5208
Examining the Influence of Security Perception on Customer Satisfaction: A Quantitative Survey in Vietnam
https://publications.eai.eu/index.php/IoT/article/view/5210
<p class="ICST-abstracttext"><span lang="EN-GB"> </span></p><p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: This study explores the elements that impact consumer satisfaction in e-commerce settings, focusing on the perception of security. The prominence of e-commerce highlights the necessity of understanding customer satisfaction determinants, emphasizing the importance of creating a secure e-commerce environment.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: Four hypotheses focused on security perception, customer service, product information, and website design affecting customer satisfaction were established and tested. A sample of Vietnamese consumers was utilized to examine these relationships empirically.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: This study employed a quantitative research approach. The multiple linear regression analysis was used to test the research hypothesis. The SPSS (IBM) Version 26 software was used for statistical data treatment.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The results revealed that security perception, customer service, and product information significantly influenced customer satisfaction, whereas website design did not. Notably, security perception emerged as a critical determinant of customer satisfaction. The outcomes of this study augment the existing scholarly resources, offering substantiated data concerning the significance of security perceptions in influencing customer gratification.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: Practical implications for online retailers include prioritizing enhancing security features, improving customer service, and providing comprehensive product information. However, this study may restrict the generalizability of the results, highlighting the need for additional research in various circumstances.</span></p>Ta Thi Nguyet TrangPham Chien ThangTran Quang Quy
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-26 2024-02-26 10 10.4108/eetiot.5210
Application of augmented reality in automotive industry
https://publications.eai.eu/index.php/IoT/article/view/5223
<p class="ICST-abstracttext"><span lang="EN-GB">Introduction: Augmented reality is defined as a direct or indirect vision of a physically real environment, parts of which they are enriched with additional digital information relevant to the object that is being looked at. In the field of engineering design, there is a wide range of industries that use this technology, such as automotive, aircraft manufacturing, electronics, engineering; so that it has gained popularity in assembly, maintenance and inspection tasks. The objective was to characterize the use of augmented reality in the automotive industry.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">Methods: a total of 20 articles in Spanish and English were reviewed, from Scopus, Science and Dialnet; Using as keywords: augmented reality, automotive industry, manufacturing, being more than 50 % of the last five years.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">Result: its main advantage is considered its potential as an interactive and intuitive interface. It promises to provide the correct information to the human operator at the right time and place. If it is considered an ideal environment in which the RA is applied safely, in adequate balance between automated processes and human control over them; The level of production and its quality will be positively affected.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">Conclusions: Augmented reality is applied in areas of the automotive industry such as logistics design, assembly, maintenance, evaluation, diagnosis, repair, inspection, quality control, instruction and marketing; in order to guarantee better work performance, productivity and efficiency, mainly mediated by portable devices. Its degree of acceptance, although growing, is not yet clear.</span></p>Denis Gonzalez-ArgoteAdrián Alejandro Vitón-CastilloJavier Gonzalez-Argote
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-27 2024-02-27 10 10.4108/eetiot.5223
Safety Wearable for Miners
https://publications.eai.eu/index.php/IoT/article/view/5261
<p class="ICST-abstracttext"><span lang="EN-GB"> </span></p><p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Mining is the process of extraction of valuable minerals, ores and other non-renewable resources from the Earth’s surface. The mining industry is known for its hazardous and highly risky working environment. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The mining industry is involved in the extraction of these geological materials, which is essential for the development of the country and its economy. However, this industry comes with its fair share of risks and dangers. Recent statistics show that around 100 miners fall victim to the harsh working conditions every year. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: Explosions due to Methane and coal dust followed by roof collapses, mine fires, gas outburst, blasting accidents, poisoning and suffocation are the major reasons out of these few of them causes deaths inside the mines.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: Even though many precautions are suggested, and measures have been taken to improve the safety of the miners and to improve the work environment, but mines are still unpredictable, and accidents are also recorded then and there. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: The existing safety technologies and measures have either failed to monitor multiple vital features that could lead to fatalities, or to provide adequate and appropriate rescue resources in time to help the miners in danger.</span></p>M RamyaG PuvaneswariR KalaivaniK Shesathri
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-01 2024-03-01 10 10.4108/eetiot.5261
CRUD Application Using ReactJS Hooks
https://publications.eai.eu/index.php/IoT/article/view/5298
<p class="ICST-abstracttext"><span lang="EN-GB">This Project is aimed to implement CRUD operations using the library called React.js which is the advanced version of the language Javascript. The question which pops out everyone’s mind now is why React? There’s lot of open – source framework/ library/ language that is available in today’s internet. The purpose of using react is anyone with the little knowledge of javascript can easily learn react and implementation of this library is user-friendly. Also, there’s many more packages react can easily get installed in NPM and run the application whatever changes in the code can do. React also delivers good UI for the developer and to create/develop new apps for future generation. There are more than 2000 developers and over one lakh sites making use of react.js among some of the most powerful are Instagram, Netflix, Whatsapp, Uber. It also integrates with any other javascript libraries or framework and speciality of react can make changes in the web browser without doing subsequent refresh of the pages. React can allow the developer to build any complex web application and more importantly mobile application. The CRUD is an acronym for Create, Read, Update and Delete which is very much essential for implementing any strong application with relational Database. Thus, the project results in Creating the user/users with their details, read the users task, updating if any new user is created through ‘Create’ operation, and finally delete the user/users list if not wanted for an application. Though the ‘crud’ functions for creating an application can be done in any language, the author specifically uses Javascript library called react.js which is one of the best platforms to create the single page user application interfaces. 
The overarching goal of this application is to bring easiness for any product development company in need of crud-user functions can stick to this.</span></p>Kanniga Parameshwari MSelvi KRekha MKarthiga R
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-04 2024-03-04 10 10.4108/eetiot.5298
Age Based Content Controlling System Using AI for Children
https://publications.eai.eu/index.php/IoT/article/view/5313
<p class="ICST-abstracttext"><span lang="EN-GB">Age detection has gotten a lot of attention in recent years because it is being used in more and more sectors. Regulations and norms imposed by the government, security measures, interactions between humans and computers, etc. Facial features and fingerprints are two of the most common human characteristics that may shift or alter throughout time. The nose, on the other hand, maintains a consistent structure that does not alter with the passage of time and possesses the singular capacity to fulfil the prerequisites of biometric attributes. This study gives a comprehensive review of how deep learning algorithms may be used to easily extract aspects of the human nose. In specifically, convolutional neural networks, also known as CNNs, are utilised for the purpose of feature extraction and classification when applied to big datasets that have numerous layers. The proposed methodology collects more private children's datasets, which contributes to a rise in the total number of datasets, which ultimately results in a rise in the 98.83 percent accuracy achieved. The results of this survey may be used to limit the material that is shared on social media by determining the age range of the participants, from under 18 to 18 and older.</span></p>T SangeethaK MythiliPrakasham PRagul Balaji S
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-062024-03-061010.4108/eetiot.5313Smart Assist System Module for Paralysed Patient Using IoT Application
https://publications.eai.eu/index.php/IoT/article/view/5315
<p>Those who are hearing impaired or hard of hearing face significant challenges as a result of their disability. To establish a bond or commit to something, people should be able to express their ideas and feelings via open channels of communication. To solve such issues, a simple, portable, and accurate assistive technology has been developed. The major focus is a glove with sensors and an Arduino microcontroller. This system was developed specifically to translate sign languages by analyzing gesture positions using smart technologies in custom gloves. The microcontroller identifies particular hand motions using sensors attached to the gloves and converts the sensor output data into text. Users' capacity to converse may be aided by their ability to read that text in the mobile IoT application. The system also aids in automating the homes of people with paralysis, and it can act as a patient monitoring device by assessing biological indicators such as pulse and temperature. The system will be put into place with the intention of enhancing the quality of life for people with disabilities and providing additional assistance in bridging the communication gap. It has a low price tag and a compact design.</p>R Kishore KannaNihar Ranjan PradhanBhawani Sankar PanigrahiSanti Swarup BasaSarita Mohanty
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-062024-03-061010.4108/eetiot.5315A Comparative Analysis of Machine Learning and Deep Learning Approaches for Prediction of Chronic Kidney Disease Progression
https://publications.eai.eu/index.php/IoT/article/view/5325
<p>Chronic kidney disease is a significant health problem worldwide that affects millions of people, and early detection of this disease is crucial for successful treatment and improved patient outcomes. In this research paper, we conducted a comprehensive comparative analysis of several machine learning algorithms, including logistic regression, Gaussian Naive Bayes, Bernoulli Naive Bayes, Support Vector Machine, X Gradient Boosting, Decision Tree Classifier, Grid Search CV, Random Forest Classifier, AdaBoost Classifier, Gradient Boosting Classifier, XgBoost, Cat Boost Classifier, Extra Trees Classifier, KNN, MLP Classifier, Stochastic gradient descent, and Artificial Neural Network, for the prediction of kidney disease. In this study, a dataset of patient records was utilized, where each record consisted of twenty-five clinical features, including hypertension, blood pressure, diabetes mellitus, appetite and blood urea. The results of our analysis showed that Artificial Neural Network (ANN) outperformed other machine learning algorithms with a maximum accuracy of 100%, while Gaussian Naive Bayes had the lowest accuracy of 94.0%. This suggests that ANN can provide accurate and reliable predictions for kidney disease. The comparative analysis of these algorithms provides valuable insights into their strengths and weaknesses, which can help clinicians choose the most appropriate algorithm for their specific requirements.</p>Susmitha MandavaSurendra Reddy VintaHritwik GhoshIrfan Sadiq Rahat
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-072024-03-071010.4108/eetiot.5325A Comprehensive Review of Machine Learning’s Role within KOA
https://publications.eai.eu/index.php/IoT/article/view/5329
<p>INTRODUCTION: Knee Osteoarthritis (KOA) is a degenerative joint disease, that predominantly affects the knee joint and causes significant global disability. The traditional methods prevailing in this field for proper diagnosis are very subjective and time-consuming, which hinders early detection. This study explored the integration of artificial intelligence (AI) in orthopedics, specifically the field of machine learning (ML) applications in KOA.</p><p>OBJECTIVES: The objective is to assess the effectiveness of Machine learning in KOA, besides focusing on disease progression, joint detection, segmentation, and its classification. ML algorithms are also applied to analyze the MRI and X-ray images for their proper classification and forecasting. The survey spanning from 2018 to 2022 investigated the treatment-seeking behavior of individuals with OA symptoms.</p><p>METHODS: Utilizing deep learning (CNN, RNN) and various ML algorithms (SVM, GBM), this study examined KOA. Machine learning was used as a subset of AI, and it played a pivotal role in healthcare, particularly in the field of medical imaging. The analysis involved reviewing the studies from credible sources like Elsevier and Web of Science.</p><p>RESULTS: Current research in the field of medical imaging CAD revealed promising outcomes. Studies that utilized CNN demonstrated 80-90% accuracy on datasets like OAI and MOST, emphasizing its varied significance in vast clinical and imaging data archives.</p><p>CONCLUSION: This comprehensive analysis highlighted the evolving landscape of research in KOA. The role of machine learning in classification, segmentation, and diagnosis of severity is very much evident. The study also anticipates a future framework optimizing KOA detection and overall classification performance, with a strong emphasis on the potential for enhancement of knee osteoarthritis diagnostics.</p>Suman RaniMinakshi MemoriaTanupriya ChoudhuryAyan Sar
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-072024-03-071010.4108/eetiot.5329A Novel Methodology for Hunting Exoplanets in Space Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5331
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Exoplanet exploration outside of our solar system has recently attracted attention among astronomers worldwide. The accuracy of the currently used detection techniques, such as the transit and radial velocity approaches is constrained. Researchers have suggested utilizing machine learning techniques to create a prediction model to increase the identification of exoplanets beyond our milky way galaxy.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The novel method proposed in this research paper builds a prediction model using a dataset of known exoplanets and their characteristics, such as size, distance from the parent star, and orbital period. The model is then trained using this data based on machine learning methods that Support Vector Machines and Random Forests.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: A different dataset of recognized exoplanets is used to assess the model’s accuracy, and the findings are compared with in comparison to accuracy rates of the transit and radial velocity approaches. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The prediction model created in this work successfully predicts the presence of exoplanets in the test data-set with an accuracy rate of over 90 percent.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: This discovery shows the promise and confidence of machine learning techniques for exoplanet detection.</span></p>Harsh Vardhan SinghNidhi AgarwalAshish Yadav
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-072024-03-071010.4108/eetiot.5331An empirically based object-oriented testing using Machine learning
https://publications.eai.eu/index.php/IoT/article/view/5344
<p>INTRODUCTION: The rapid growth of machine learning has the potential to revolutionize various industries and applications by automating complex tasks and enhancing efficiency. Effective software testing is crucial for ensuring software quality and minimizing resource expenses in software engineering. Machine learning techniques play a vital role in software testing by aiding in test case prioritization, predicting software defects, and analyzing test results.</p><p>OBJECTIVES: The primary objective of this study is to explore the use of machine learning algorithms for software defect prediction.</p><p>METHODS: Machine Learning models including Random Forest Classifier, Logistic Regression, K Nearest Neighbors, Gradient Boosting Classifiers, Catboost Classifier, and Convolutional Neural Networks have been employed for the study. The dataset includes a wide range of features relevant to software defect prediction and evaluates the performance of different prediction models. The study also focussed on developing hybrid models using stacking classifiers, which combine multiple individual models to improve accuracy.</p><p>RESULTS: The experimental results show that the hybrid models combining CatBoost and Convolutional Neural Network have outperformed individual models, achieving the highest accuracy of 89.5%, highlighting the effectiveness of combining machine learning algorithms for software defect prediction.</p><p>CONCLUSION: In conclusion, this study sheds light on the pivotal role of machine learning in enhancing software defect prediction.</p>Pusarla SindhuGiri Sainath PeruriMonisha Yalavarthi
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-082024-03-081010.4108/eetiot.5344Applied Deep learning approaches on canker effected leaves to enhance the detection of the disease using Image Embedding and Machine learning Techniques
https://publications.eai.eu/index.php/IoT/article/view/5346
<p class="ICST-abstracttext"><span lang="EN-GB">Canker, a disease that causes considerable financial losses in the agricultural business, is a small deep lesion that is visible on the leaves of many plants, especially citrus/apple trees. Canker detection is critical for limiting its spread and minimizing harm. To address this issue, we describe a computer vision-based technique that detects Canker in citrus leaves using image embedding and machine learning (ML) algorithms. The major steps in our proposed model include image embedding, and machine learning model training and testing. We started with preprocessing and then used image embedding techniques like Inception V3 and VGG 16 to turn the ROIs into feature vectors that retained the relevant information associated with Canker leaf disease, using the feature vectors acquired from the embedding stage, we then train and evaluate various ML models such as support vector machines (SVM), Gradient Boosting, neural network, and K Nearest Neighbor. Our experimental results utilizing a citrus leaf picture dataset show that the proposed strategy works. With Inception V3 as the image embedder and neural network machine learning model we have obtained an accuracy of 95.6% which suggests that our approach is effective in canker identification. Our method skips traditional image processing techniques that rely on by hand features and produces results equivalent to cutting-edge methods that use deep learning models. Finally, our proposed method provides a dependable and efficient method for detecting Canker in leaves. Farmers and agricultural specialists can benefit greatly from early illness diagnosis and quick intervention to avoid disease spread as adoption of such methods can significantly reduce the losses incurred by farmers and improve the quality of agricultural produce.</span></p>K Badri NarayananDevatha Krishna SaiKorrapati Akhil ChowdarySrinivasa Reddy K
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-082024-03-081010.4108/eetiot.5346Credit Card Deception Recognition Using Random Forest Machine Learning Algorithm
https://publications.eai.eu/index.php/IoT/article/view/5347
<p>INTRODUCTION: Credit card deception poses a global threat, resulting in significant monetary losses and identity theft. Detecting fraudulent transactions promptly is crucial for mitigating these losses. Machine learning algorithms, specifically the random forest algorithm, show promise in addressing this issue.</p><p>OBJECTIVES: This research paper presents a comprehensive study of numerous machine learning algorithms for credit card deception recognition, focusing on the random forest algorithm.</p><p>METHODS: To tackle the increasing fraud challenges and the need for more effective detection systems, we develop an advanced credit card deception detection system utilizing machine learning algorithms. We evaluate our system's performance using precision, recall, and F1-score metrics. Additionally, we provide insights into the key features for fraud detection, empowering financial institutions to enhance their detection systems. The paper follows a structured approach.</p><p>RESULTS: We review existing work on credit card fraud detection, detail the dataset and pre-processing steps, present the random forest algorithm and its application to fraud detection, compare its performance against other algorithms, discuss fraud detection challenges, and propose effective solutions.</p><p>CONCLUSION: Finally, we conclude the research paper and suggest potential areas for future research. Our experiments demonstrate that the random forest algorithm surpasses other machine learning algorithms in accuracy, precision, recall, and F1-score. Moreover, the system effectively addresses challenges like imbalanced data and high-dimensional feature spaces. Our findings offer valuable insights into the most relevant features for fraud detection, empowering financial organizations to improve their fraud detection capabilities.</p>Ishita JaiswalAnupama BharadwajKirti KumariNidhi Agarwal
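As an illustrative sketch only (not the authors' implementation or dataset), the following shows how a random forest can be trained and scored with the precision, recall, and F1 metrics the abstract mentions, here on a synthetic imbalanced classification task; all parameter choices are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic "transactions": roughly 5% positive (fraudulent) class,
# standing in for a real, highly imbalanced credit card dataset.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" is one common way to counter class imbalance.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=42)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"precision={precision_score(y_te, pred):.2f}",
      f"recall={recall_score(y_te, pred):.2f}",
      f"f1={f1_score(y_te, pred):.2f}")
```

Precision and recall together are more informative than accuracy here, since predicting "not fraud" everywhere would already score ~95% accuracy on such data.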
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-082024-03-081010.4108/eetiot.5347Early-Stage Disease Prediction from Various Symptoms Using Machine Learning Models
https://publications.eai.eu/index.php/IoT/article/view/5361
<p>The development and exploration of data analytics techniques in various real-time applications across domains (e.g., industry, healthcare, neuroscience) have led to their use in extracting paramount features from datasets. Following the introduction of new computer technology, the health sector underwent a significant transformation that compelled it to produce more medical data, which gave rise to a number of new disciplines of study. Quite a few initiatives have been made to deal with medical data and to explore how its usage can be helpful to humans. This has inspired academics and other institutions to use techniques such as data analytics, machine learning, and related algorithms to extract practical information and aid in decision-making. Healthcare data can be used to develop a health prediction system that can improve a person's health. Based on the dataset provided, making accurate predictions for early disease detection benefits the human community.</p>Devansh AjmeraTrilok Nath PandeyShrishti SinghSourasish PalShrey VyasChinmaya Kumar Nayak
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5361Efficient Usage of Energy Infrastructure in Smart City Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5363
<p class="ICST-abstracttext"><span lang="EN-GB">The concept of smart cities revolves around utilizing modern technologies to manage and optimize city operations, including energy infrastructure. One of the biggest problems that smart cities have to deal with is ensuring the efficient usage of energy infrastructure to reduce energy consumption, cost, and environmental impact. Machine learning is a powerful tool that can be utilized to optimize energy usage in smart cities. This paper proposes a framework for efficient usage of energy machine learning for city infrastructure in smart cities. The proposed framework includes three main components: data collection, machine learning model development, and energy infrastructure optimization. The data collection component involves collecting energy consumption data from various sources, such as smart meters, sensors, and other IoT devices. The collected data is then pre-processed and cleaned to remove any inconsistencies or errors. The machine learning model development component involves developing machine learning models to predict energy consumption and optimize energy usage. The models can be developed using various techniques such as regression, classification, clustering, and deep learning. These models can predict energy consumption patterns based on historical data, weather conditions, time of day, and other factors. The energy infrastructure optimization component involves utilizing the machine learning models to optimize energy usage. The optimization process involves adjusting energy supply and demand to reduce energy consumption and cost. The optimization process can be automated, and SVM based machine learning models can continuously enhance their precision over time by studying the data. The proposed framework has several benefits, including reducing energy consumption, cost, and environmental impact. 
It can also improve the reliability and stability of energy infrastructure, reduce the risk of blackouts, and improve the overall quality of life in highly developed urban areas. Last but not least, the projected framework for efficient usage of energy machine learning for city infrastructure in smart cities is a promising solution to optimize energy usage and reduce energy consumption and cost. The framework can be implemented in various smart city applications, including buildings, transportation, and industrial processes.</span></p>Rajesh RajaanBhaskar Kamal BaishyaTulasi Vigneswara RaoBalachandra PattanaikMano Ashish TripathiAnitha R
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5363Enhancing Audio Accessory Selection through Multi-Criteria Decision Making using Fuzzy Logic and Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5364
<p class="ICST-abstracttext"><span lang="EN-GB">This research paper aims to investigate the significance of electrical products, specifically earbuds and headphones, in the digital world. The processes of decision-making and purchasing of audio accessories are often characterized by a significant investment of time and effort, as well as a complex interplay of competing priorities. In addition, various methodologies are employed for the selection and procurement of audio equipment through the utilization of machine learning algorithms. This study aimed to gather responses from a diverse group of participants regarding their preferences for the latest functionalities and essential components in their gadgets. The data was collected through a questionnaire that provided multiple options about the specifications of the audio accessories for the participants to choose from. The study employed seven distinct input factors to elicit responses from participants. These factors included brand, type, design, fit, price, noise cancellation, and folding design. The quantification of each input parameter was executed through the utilization of a scaling function in the Fuzzy Logic Interface, which assigned the labels “Yes” or “No” to each parameter. In this study, the Mamdani approach, which is a widely used fuzzy reasoning tool, was employed to develop a fuzzy logic controller (FLC) consisting of seven input and one output processes. In this study, standard fuzzy algorithms were employed to enhance the accuracy of the process of selecting an audio accessory in accordance with the user's specific requirements on the basis of Fuzzy threshold where “Yes” signifies about the availability of such audio accessory and “No” refers to the non-availability and readjustment of the input parameters.</span></p>Sagar Mousam ParidaSagar Dhanraj PandeNagendra Panini ChallaBhawani Sankar Panigrahi
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5364Enhancing Heart Disease Prediction Accuracy Through Hybrid Machine Learning Methods
https://publications.eai.eu/index.php/IoT/article/view/5367
<p>INTRODUCTION: Over the past few decades, heart disorders have been the leading cause of mortality worldwide. People over 55 must get a thorough cardiovascular examination to prevent heart disease or coronary sickness and identify early warning signs. To increase the ability of healthcare providers to recognize cardiovascular illness, researchers and experts have devised a variety of clever ways.</p><p>OBJECTIVES: The goal of this research was to propose a robust strategy for cardiac issue prediction utilizing machine learning methods. The healthcare industry generates a massive quantity of data, and machine learning has proved effective in making decisions and generating predictions with this data.</p><p>METHODS: AI has been shown to be useful in assisting prediction and decision-making because of the tremendous amount of information produced by the healthcare industry. Few researchers have examined the capability of AI to predict heart disease. In this article, we suggest a creative strategy to improve the accuracy of cardiovascular disease predictions by finding critical features using AI systems.</p><p>CONCLUSION: There is a lot of promise and possibility in using machine learning techniques to forecast cardiac disease. By examining a range of datasets and applying multiple machine-learning methods, alongside various feature combinations and suitable classification procedures, the prediction model is presented. We accomplish a better performance level with a Hybrid Random Forest with a Linear Model as our heart disease prediction model.</p>Nukala Sujata GuptaSaroja Kumar RoutShekharesh BarikRuth Ramya KalangiB Swampa
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5367Improving Student Grade Prediction Using Hybrid Stacking Machine Learning Model
https://publications.eai.eu/index.php/IoT/article/view/5369
<p class="ICST-abstracttext"><span lang="EN-GB">With increasing technical procedures, academic institutions are adapting to a data-driven decision-making approach of which grade prediction is an integral part. The purpose of this study is to propose a hybrid model based on a stacking approach and compare its accuracy with those of the individual base models. The model hybridizes K-nearest neighbours, Random forests, XGBoost and multi-layer perceptron networks to improve the accuracy of grade prediction by enabling a combination of strengths of different algorithms for the creation of a more robust and accurate model. The proposed model achieved an average overall accuracy of around 90.9% for 10 epochs, which is significantly higher than that achieved by any of the individual algorithms of the stack. The results demonstrate the improvement of prediction results but using a stacking approach. This study has significant implications for academic institutions which can help them make informed grade predictions for the improvement of student outcomes.</span></p>Saloni ReddySagar Dhanraj Pande
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5369Integrating Intellectual Consciousness AI based on Ensemble Machine Learning for Price Negotiation in E-commerce using Text and Voice-Based Chatbot
https://publications.eai.eu/index.php/IoT/article/view/5370
<p class="ICST-abstracttext"><span lang="EN-GB">Online shopping has experienced an enormous boost in recent years. With this evolution, the majority of internet shopping's capabilities have been developed, but some functions, such as negotiating with store owners, are still not available. This paper suggests employing a chatbot with a voice assistant to negotiate product prices. Customers can communicate with the chatbot to get assistance in finding a reasonable price for a product. In online purchasing, there is a possibility that the consumers or the <span style="letter-spacing: -.05pt;">product</span> <span style="letter-spacing: -.05pt;">seller's</span> <span style="letter-spacing: -.05pt;">budget</span> may be compromised. In order to assist in purchasing, algorithm has been created in machine learning that uses the forecasting of historical data to avoid compromising situations. However, improper dataset or when irrelevant aspects or at- tributes of the data are used, price prediction might become less accurate. Ecommerce companies do not merely depend on price prediction tools due to the significant financial losses brought on even by a single inaccurate price prediction. Additionally, few models fail to perform well when the data saturates or when an attribute becomes inaccessible after the period for which the model's prediction was reliant. By controlling these alterations, the accuracy and dependability are preserved in the model pro- posed in this study.</span></p>Yagnesh ChallagundlaLohitha Rani ChintalapatiTrilok Sai Charan TunuguntlaAnupama NamburuSrinivasa Reddy KJanjhyam Venkata Naga Ramesh
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-112024-03-111010.4108/eetiot.5370Prediction and Analysis of Bitcoin Price using Machine learning and Deep learning models
https://publications.eai.eu/index.php/IoT/article/view/5379
<p>High accessibility and easy investment make cryptocurrency an important income source for many people. Cryptocurrency is a kind of digital/virtual currency created using blockchain technology and protected by cryptography. Cryptocurrencies enable users to accept, transfer and request capital between users without the requirement of intermediaries such as banks. Nowadays many cryptocurrencies are available across the world, such as Bitcoin, Litecoin, Monero, and Dogecoin. This study focuses on a very famous and in-demand cryptocurrency, Bitcoin, over the past years. Here, we first make an effort to predict the price of Bitcoin by examining numerous parameters that affect its cost. Different kinds of machine learning models are used to estimate the price of Bitcoin. This study reports the accuracy and precision of each model used and determines the most suitable method for estimating the price accurately.</p>Vinay KarnatiLakshmi Dathatreya KannaTrilok Nath PandeyChinmaya Kumar Nayak
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-122024-03-121010.4108/eetiot.5379Prediction of Intermittent Demand Occurrence using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5381
<p class="ICST-abstracttext"><span lang="EN-GB">Demand forecasting plays a pivotal role in modern Supply Chain Management (SCM). It is an essential part of inventory planning and management and can be challenging at times. One of the major issues being faced in demand forecasting is insufficient forecast accuracy to predict the expected demand and fluctuation in actual vs. the predicted demand results in fore-casting errors. This problem is further exaggerated with slow-moving and intermittent demand items. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">Every organization encounters large proportions of items that have small ir-regular demand with long periods of zero demand, which are known as intermittent demand Items. Demand for such items occur sporadically and with considerable fluctuation in the size of the demand. Forecasting of the intermittent demand entails the prediction of demand series that is characterized by the time interval between demand being significantly greater than the unit forecast period. Because of this there are multiple periods of no demand in the intermittent demand time series. The challenge with these products with low irregular demand is that these items need to be stocked and replenished at regular interval irrespective of the demand cycle, thus adding to the cost of holding the inventory. Since the demand is not continuous, Traditional Forecasting models are unable to provide reliable estimate of required inventory level and replenishment point. Forecast errors would resulting in obsolescent stock or unfulfilled demand. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">The current paper presents a simple yet powerful approach for generating a demand forecasting and replenishment process for such low volume intermittent demand items to come up with a recommendation for dynamic re-order point, thus, improving the inventory performance of these items. 
Currently, the demand forecast is generally based on past usage patterns. The rise of Artificial Intelligence/Machine Learning (AI/ML) has provided a strong alternative to solve the problem of forecasting Intermittent Demand. The intention is to highlight that machine learning algorithm is more efficient and accurate than traditional forecasting method. As we move forward to industry 4.0, the digital supply chain is considered as the most essential com-ponent of the value chain wherein the inventory size is controlled, and the demand predicted.</span></p>Ashish K SinghJ B SimhaRashmi Agarwal
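One classical baseline for the intermittent-demand series described above is Croston's method, which smooths non-zero demand sizes and inter-demand intervals separately. The sketch below is illustrative only and is not the paper's model; the smoothing constant `alpha` and the toy series are assumptions:

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand.

    Keeps two exponentially smoothed quantities: the size of non-zero
    demands (z) and the interval between them (p). The per-period
    forecast is z / p, i.e. expected demand per period.
    """
    z = None  # smoothed non-zero demand size
    p = None  # smoothed inter-demand interval
    q = 1     # periods since the last non-zero demand
    forecasts = []
    for d in demand:
        # Forecast for this period, made before observing d.
        forecasts.append(z / p if z is not None else 0.0)
        if d > 0:
            if z is None:          # initialise on first non-zero demand
                z, p = d, q
            else:                  # exponential smoothing updates
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecasts

# Toy series with long zero-demand gaps.
print(croston([0, 3, 0, 0, 6]))  # -> [0.0, 0.0, 1.5, 1.5, 1.5]
```

The z/p forecast can feed a dynamic re-order point, e.g. forecast over the lead time plus a safety margin, which is the kind of recommendation the paper targets.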
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-122024-03-121010.4108/eetiot.5381Stage by stage E- Ecommerce market database analysis by using machine learning models
https://publications.eai.eu/index.php/IoT/article/view/5383
<p>In the recent era, advertising strategies are far more sophisticated than those of their predecessors. In marketing, business contacts are essential for online transactions, and supporting that communication requires a database; database marketing is one of the best techniques for enhancing a business and analyzing market strategies. Businesses may improve consumer experiences, streamline supply chains, and generate more income by analyzing e-commerce market datasets using machine learning models. In the ever-changing and fiercely competitive world of e-commerce, a multi-stage strategy guarantees a thorough and efficient use of machine learning. Analyzing the database can help to understand the current requirements of users or the industry. Machine learning models are developed to support the marketing sector and can operate on or analyze e-commerce data in different stages, i.e., systematic setup, status analysis, and model development with the implementation process. Using these models, it is possible to analyze the marketing database and create new marketing strategies covering the distribution of marketing objects, the percentage of marketing channels, and the composition of marketing approaches. The approach underpins marketing theory, data collection, processing, and positive and negative control samples. It is suggested that e-commerce primarily adopt the database marketing method of model prediction, which is done by substituting predicted samples into the model for testing. On the one hand, machine learning algorithms may resolve the issue of unequal marketing item distribution; on the other, the loss of prospective customers can be efficiently avoided.
Also, an application approach is proposed that enhances the effectiveness of existing database marketing techniques and supports model prediction.</p>Narendra RyaliNikita ManneA RavisankarMano Ashish TripathiRavindra TripathiM Venkata Naresh
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 | 2024-03-12 | Vol. 10 | DOI: 10.4108/eetiot.5383
Artificial Intelligence in Intellectual Property Protection: Application of Deep Learning Model
https://publications.eai.eu/index.php/IoT/article/view/5388
<p>Creating and training a deep learning model costs far more than obtaining an already-trained one, so a trained model is considered the intellectual property (IP) of its creator. However, there is every chance of such high-performance models being illegally copied, redistributed, and abused by malicious users. To protect against these menaces, a number of deep neural network (DNN) IP security techniques have been developed recently. The present study examines existing DNN IP security work. First, a taxonomy of DNN IP protection techniques is proposed from the perspective of six aspects: scenario, method, size, category, function, and target models. Afterwards, the paper focuses on the challenges faced by these methods and their capability to resist malicious attacks at different levels by providing proactive protection. An analysis is also made of the potential threats to DNN IP security techniques from various perspectives, such as model modification, evasion, and active attacks.</p><p>Apart from that, this paper looks into methodical assessment and explores future research possibilities on DNN IP security, considering the different challenges it would confront in the process of its operation.</p><p>Result Statement: A high-performance deep neural network (DNN) model is far costlier to produce than to copy once trained, and it is considered the intellectual property (IP) of the person responsible for creating it. Infringement of the IP of DNN models has been a grave concern in recent years. This article summarizes current DNN IP security works, focusing on the limitations and challenges they confront, and considers each model's capacity for protection and resistance against attacks at various stages.</p>Parthasarathi Pattnayak, Tulip Das, Arpeeta Mohanty, Sanghamitra Patnaik
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 | 2024-03-12 | Vol. 10 | DOI: 10.4108/eetiot.5388
Deep Learning Techniques for Identification of Different Malvaceae Plant Leaf Diseases
https://publications.eai.eu/index.php/IoT/article/view/5394
<p>INTRODUCTION: The precise and timely detection of plant diseases plays a crucial role in ensuring efficient crop management and disease control. Nevertheless, conventional methods of disease identification, which heavily rely on manual visual inspection, are often time-consuming and susceptible to human error. The knowledge acquired from this research enhances the overall comprehension of the discipline and offers valuable direction for future progress in applying deep learning to plant disease identification.[1][2]</p><p>AIM: To investigate the utilization of deep learning techniques in identifying various Malvaceae plant diseases.</p><p>METHODS: AlexNet, VGG, Inception, ResNet, and other CNN architectures are analyzed on Malvaceae plant diseases, especially on cotton, okra, and hibiscus, along with different data collection methods, data augmentation, and normalization techniques.</p><p>RESULTS: Inception V4 achieved a training accuracy of 98.58%, training loss of 0.01%, test accuracy of 97.59%, and test loss of 0.0586%; VGG-16 achieved a training accuracy of 84.27%, training loss of 0.52%, test accuracy of 82.75%, and test loss of 0.64%; ResNet-50 achieved a training accuracy of 98.72%, training loss of 6.12%, test accuracy of 98.73%, and test loss of 0.027%; DenseNet achieved a training accuracy of 98.87%, training loss of 0.016%, test accuracy of 99.81%, and test loss of 0.0154%.</p><p>CONCLUSION: The conclusion summarizes the key findings and highlights the potential of deep learning as a valuable tool for accurate and efficient identification of Malvaceae plant diseases.</p>Mangesh K Nichat, Sanjay E Yedey
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-13 | 2024-03-13 | Vol. 10 | DOI: 10.4108/eetiot.5394
A Deep Learning Approach for Ship Detection Using Satellite Imagery
https://publications.eai.eu/index.php/IoT/article/view/5435
<p>INTRODUCTION: This paper addresses ship detection in satellite imagery through a deep learning approach, vital for maritime applications. Traditional methods face challenges with large datasets, motivating the adoption of deep learning techniques.</p><p>OBJECTIVES: The primary objective is to present an algorithmic methodology for U-Net model training, focusing on achieving accuracy, efficiency, and robust ship detection. Overcoming manual limitations and enhancing real-time monitoring capabilities are key objectives.</p><p>METHOD: The methodology involves dataset collection from Copernicus Open Hub, employing run-length encoding for efficient preprocessing, and utilizing a U-Net model trained on Sentinel-2 images. Data manipulation includes run-length encoding, masking, and balanced dataset preprocessing.</p><p>RESULT: Results demonstrate the proposed deep learning model's effectiveness in handling diverse datasets, ensuring accuracy through U-Net architecture, and addressing imbalances. The algorithmic process showcases proficiency in ship detection.</p><p>CONCLUSION: In conclusion, this paper contributes a comprehensive methodology for ship detection, significantly advancing accuracy, efficiency, and robustness in maritime applications. The U-Net-based model successfully automates ship detection, promising real-time monitoring enhancements and improved maritime security.</p>Alakh Niranjan, Sparsh Patial, Aditya Aryan, Akshat Mittal, Tanupriya Choudhury, Hamidreza Rabiei-Dastjerdi, Praveen Kumar
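The abstract above relies on run-length encoding (RLE) for efficient mask preprocessing. A minimal sketch of an RLE encode/decode pair for binary ship masks might look like the following; the column-major, 1-indexed convention is an assumption (it is the one common in public ship-segmentation datasets), not necessarily the authors' exact scheme:

```python
import numpy as np

def rle_encode(mask: np.ndarray) -> str:
    """Encode a binary mask as a run-length string
    (column-major flattening, 1-indexed run starts)."""
    pixels = mask.flatten(order="F")
    padded = np.concatenate([[0], pixels, [0]])
    # positions where the value changes: run starts and run ends
    runs = np.where(padded[1:] != padded[:-1])[0] + 1
    runs[1::2] -= runs[0::2]          # convert end positions to lengths
    return " ".join(str(x) for x in runs)

def rle_decode(rle: str, shape: tuple) -> np.ndarray:
    """Decode a run-length string back into a binary mask."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    tokens = list(map(int, rle.split()))
    for start, length in zip(tokens[0::2], tokens[1::2]):
        mask[start - 1:start - 1 + length] = 1
    return mask.reshape(shape, order="F")
```

Storing sparse ship masks as short run-length strings is far more compact than keeping full pixel arrays, which is why feed-style datasets ship labels in this form.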
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-15 | 2024-03-15 | Vol. 10 | DOI: 10.4108/eetiot.5435
A Study of the Application of AI & ML to Climate Variation, with Particular Attention to Legal & Ethical Concerns
https://publications.eai.eu/index.php/IoT/article/view/5468
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: This research investigates the utilization of artificial intelligence and machine learning in comprehending climatic variations, emphasizing the associated legal and ethical considerations. The escalating impact of climate change necessitates innovative approaches, and AI/ML offers powerful tools for analysis and prediction.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The primary objective was to assess the effectiveness of AI/ML in deciphering varying climatic patterns and projecting future trends. Concurrently, this study aims to identify and analyse the legal and ethical challenges that may arise from integrating these technologies in climatic research and policy.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: A literature review forms the basis for understanding AI/ML applications in climate science. The study employs case analyses of existing models to gauge the accuracy and efficiency of predictions. Legal frameworks and ethical principles are scrutinized through qualitative analysis of relevant policies and guidelines.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The research reveals significant contributions of AI/ML in enhancing the precision of climatic modeling and the prediction of extreme events. However, legal and ethical considerations such as data privacy, accountability, and transparency also emerged as crucial challenges requiring careful attention.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: While AI/ML exhibits great potential in advancing climate research, a balanced approach is imperative to navigate the associated legal and ethical concerns. Striking this equilibrium will be pivotal for ensuring the responsible and effective deployment of these technologies in the pursuit of understanding and mitigating climatic variations.</span></p>Maheshwari Narayan Joshi, Anil Kumar Dixit, Sagar Saxena, Minakshi Memoria, Tanupriya Choudhury, Ayan Sar
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-19 | 2024-03-19 | Vol. 10 | DOI: 10.4108/eetiot.5468
An AI-Enabled Blockchain Algorithm: A Novel Approach to Counteract Blockchain Network Security Attacks
https://publications.eai.eu/index.php/IoT/article/view/5484
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: In this research, we present a novel method for strengthening the security of blockchain networks through the use of AI-driven technology. Blockchain has emerged as a game-changing technology across industries, but its security flaws, particularly in relation to Sybil and Distributed Denial of Service (DDoS) attacks, are a major cause for concern. To defend the blockchain from these sophisticated attacks, our research centres on creating a strong security solution that combines Long Short-Term Memory (LSTM) networks and Self-Organizing Maps (SOM).</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The main goal of this project is to create and test an AI-driven blockchain algorithm that enhances blockchain security by utilising LSTM and SOM networks. The research objectives are: to assess the shortcomings and weaknesses of existing blockchain security mechanisms; to create a new approach that uses LSTM sequence learning and SOM pattern recognition to anticipate and stop security breaches; and to evaluate how well this integrated strategy works against different types of security risks in a simulated blockchain setting.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: The methodology combines self-organizing maps (SOM) for pattern recognition with long short-term memory (LSTM) networks for learning and predicting event sequences from historical data. The research steps are: examining the current state of blockchain security mechanisms in detail; creating a virtual blockchain and incorporating the SOM+LSTM algorithm; and putting the algorithm through its paces to see how well it detects and defends against different security risks.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: Significant enhancements to blockchain network security are the primary outcomes of this study. Important results include: increased detection rates of potential security risks, such as Sybil and DDoS attacks, using the SOM+LSTM technique; enhanced reaction times for attack prediction and prevention compared to conventional security techniques; and a demonstrated ability of the algorithm to adapt and learn from new attack patterns, ensuring long-term sustainability.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: This paper's findings highlight the efficacy of enhancing blockchain security by integrating artificial intelligence technologies such as LSTM and SOM networks. In addition to improving blockchain technology's detection and forecasting capabilities, the SOM+LSTM algorithm helps advance the platform toward greater security and reliability. This study provides a solid answer to the increasing worries about cyber dangers in the modern era and opens the door to more sophisticated AI uses in blockchain security.</span></p>Anand Singh Rajawat, S B Goyal, Manoj Kumar, Thipendra P Singh
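The methods above pair SOM pattern recognition with LSTM sequence learning. A minimal self-organizing map, sketched in NumPy purely to illustrate the best-matching-unit (BMU) and neighbourhood update that SOM-based anomaly detection relies on; the grid size and the learning-rate/neighbourhood schedules here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

class MiniSOM:
    """A minimal 2-D self-organizing map: each grid node holds a weight
    vector pulled toward inputs that land near its best-matching unit."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((rows, cols, dim))
        # (row, col) coordinates of every node, for neighbourhood distances
        self.grid = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"),
            axis=-1)

    def bmu(self, x):
        """Grid coordinates of the node whose weights are closest to x."""
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr0=0.5, sigma0=1.5):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                  # decaying rate
            sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking radius
            for x in data:
                b = np.array(self.bmu(x))
                dist2 = ((self.grid - b) ** 2).sum(axis=-1)
                h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                self.w += lr * h * (x - self.w)
```

After training, inputs from distinct traffic patterns map to distinct BMUs, which is the property an anomaly detector exploits.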
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-20 | 2024-03-20 | Vol. 10 | DOI: 10.4108/eetiot.5484
Artificial Intelligence-based Legal Application for Resolving Issues Related to Live-In Relationship
https://publications.eai.eu/index.php/IoT/article/view/5485
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: The societal landscape in India has witnessed a transformative shift in perspectives on relationships, with an increasing prevalence of live-in couples challenging the traditional norms of marriage. This ongoing trend has, however, brought a surge in legal complexities, including recognition, partner rights, property disputes, and inheritance issues. This study proposes an innovative approach that leverages the potential of Artificial Intelligence and automatic speech recognition for the registration and redressal of live-in relationship matters.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: This research seeks to optimize the resolution of live-in relationship disputes arising in the legal sphere with the help of an AI-based platform. The primary goal was to overcome physical barriers while ensuring proper access to legal procedures for registering and addressing grievances related to live-in relationships.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: A thorough review was conducted using resources from Scopus, PubMed, and ResearchGate. The research explored the increasing complaints and varying victim counts in live-in relationship cases, ultimately attributing these issues to a lack of physical access to legal remedies.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The study emphasizes the significance of AI-driven redressal processes in alleviating, in real time, the hurdles and challenges associated with live-in relationship cases. The proposed framework and platform aim to offer an alternative means for individuals who are unable to physically approach the authorities, facilitating a quicker, more efficient, and seamless path to legal resolution.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: This study advocates for the integration of AI and automatic speech recognition technologies in the legal domain, specifically for addressing live-in relationship issues. The implementation of such a system has the potential to bridge gaps in accessibility, thereby contributing to a more inclusive and efficient legal framework for individuals in live-in relationships.</span></p>Pallavi Gusain, Poonam Rawat, Minakshi Memoria, Tanupriya Choudhury, Ayan Sar
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-20 | 2024-03-20 | Vol. 10 | DOI: 10.4108/eetiot.5485
Leveraging AI and Blockchain for Privacy Preservation and Security in Fog Computing
https://publications.eai.eu/index.php/IoT/article/view/5555
<p>INTRODUCTION: Fog computing, an offshoot of cloud computing, moves crucial data storage, processing, and networking capabilities closer to the people who need them. It offers clear advantages, such as improved efficiency and lower latency, but also raises major privacy and security concerns. For these reasons, this article presents a new paradigm for fog computing that makes use of blockchain and Artificial Intelligence (AI).</p><p>OBJECTIVES: The main goal of this research is to create and assess a thorough framework for fog computing that incorporates AI and blockchain technology. With an emphasis on protecting the privacy and integrity of data transactions and streamlining the management of massive amounts of data, this project seeks to improve the security and privacy of cloud-based Industrial Internet of Things (IIoT) systems.</p><p>METHODS: The efficiency and accuracy of data processing in fog computing are ensured by the application of artificial intelligence, most notably the Support Vector Machine (SVM), chosen for its resilience in classification and regression tasks. The network's security and reliability are enhanced by incorporating blockchain technology, which creates a decentralised, tamper-resistant system. To strengthen the privacy of users' data, zero-knowledge proof techniques are used to confirm ownership of data without actually disclosing it.</p><p>RESULTS: When applied to fog computing data, the suggested approach achieves a remarkable classification accuracy of 99.8 percent. While the consensus decision-making process of the blockchain guarantees trustworthy and secure operations, the SVM efficiently handles massive data analyses. Even in delicate situations, the zero-knowledge proof techniques keep data private. When these technologies are integrated into the fog computing ecosystem, the chances of data breaches and illegal access are greatly reduced.</p><p>CONCLUSION: Fog computing that combines AI with blockchain offers a powerful answer to the privacy and security issues of cloud-centric IIoT systems. SVM-based analysis makes data processing more efficient, while blockchain's decentralised and immutable properties make it a strong security measure. Additional protection for user privacy is provided via zero-knowledge proofs, and together these elements markedly improve the privacy and security of fog computing networks.</p>S B Goyal, Anand Singh Rajawat, Manoj Kumar, Prerna Agarwal
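The framework above confirms data ownership with zero-knowledge proofs. A toy Schnorr-style proof of knowledge (made non-interactive with the Fiat-Shamir heuristic) conveys the idea: the prover shows knowledge of a secret exponent, standing in for a key bound to the data, without revealing it. The tiny demo parameters and the data-to-secret mapping are illustrative assumptions; real systems use large, vetted groups:

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge of x such that y = g^x mod p.
# Demo-sized parameters only; never use these in production.
P = 2**127 - 1   # prime modulus (a Mersenne prime, chosen for the demo)
Q = P - 1        # exponents are reduced mod p-1 (Fermat's little theorem)
G = 3            # demo group element

def prove(x: int) -> tuple:
    """Prover: commit to a nonce, derive a challenge by hashing
    (Fiat-Shamir), and answer without ever revealing x."""
    k = secrets.randbelow(Q)
    r = pow(G, k, P)                       # commitment
    y = pow(G, x, P)                       # public value for the secret
    c = int.from_bytes(hashlib.sha256(f"{r}{y}".encode()).digest(),
                       "big") % Q          # challenge
    s = (k + c * x) % Q                    # response
    return y, r, s

def verify(y: int, r: int, s: int) -> bool:
    """Verifier: recompute the challenge and check g^s == r * y^c,
    which holds iff the prover knew x -- x itself is never seen."""
    c = int.from_bytes(hashlib.sha256(f"{r}{y}".encode()).digest(),
                       "big") % Q
    return pow(G, s, P) == (r * pow(y, c, P)) % P
```

The check works because g^s = g^(k + c·x) = g^k · (g^x)^c = r · y^c (mod p), so a valid response is only producible with knowledge of x.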
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-26 | 2024-03-26 | Vol. 10 | DOI: 10.4108/eetiot.5555
Lightweight Cryptography for Internet of Things: A Review
https://publications.eai.eu/index.php/IoT/article/view/5565
<p class="ICST-abstracttext"><span lang="EN-GB">The paper examines the rising significance of security in Internet of Things (IoT) applications and emphasizes the need for lightweight cryptographic solutions to protect IoT devices. It acknowledges the growing prevalence of IoT in various fields, where sensors collect data and computational systems process it for action by actuators. Due to IoT devices' resource limitations and networked nature, security is a concern. The article compares different lightweight cryptographic block cipher algorithms to determine the best approach for securing IoT devices. It also discusses the merits of hardware versus software solutions and explores potential security threats, including intrusion and manipulation. Additionally, the article outlines future work involving the implementation of the trusted Advanced Encryption Standard (AES) block cipher in IoT devices, including its use in quick-response (QR) code scanning and messaging platforms. It acknowledges existing drawbacks and suggests areas for improvement in IoT system performance and security.</span></p>Amrita, Chika Paul Ekwueme, Ibrahim Hussaini Adam, Avinash Dwivedi
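For a concrete sense of the kind of cipher such reviews compare, here is XTEA, a classic lightweight block cipher (64-bit block, 128-bit key, simple add/shift/xor rounds that fit constrained devices). It is shown as a representative example of the category, not necessarily one of the algorithms benchmarked in the paper:

```python
MASK = 0xFFFFFFFF          # keep arithmetic in 32-bit words
DELTA = 0x9E3779B9         # XTEA's key-schedule constant

def xtea_encrypt(v0, v1, key, rounds=32):
    """Encrypt one 64-bit block (two 32-bit words) under a 128-bit key
    given as four 32-bit words."""
    s = 0
    for _ in range(rounds):
        v0 = (v0 + (((((v1 << 4) ^ (v1 >> 5)) + v1)
                     ^ (s + key[s & 3])))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + (((((v0 << 4) ^ (v0 >> 5)) + v0)
                     ^ (s + key[(s >> 11) & 3])))) & MASK
    return v0, v1

def xtea_decrypt(v0, v1, key, rounds=32):
    """Invert the rounds by running the schedule backwards."""
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - (((((v0 << 4) ^ (v0 >> 5)) + v0)
                     ^ (s + key[(s >> 11) & 3])))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - (((((v1 << 4) ^ (v1 >> 5)) + v1)
                     ^ (s + key[s & 3])))) & MASK
    return v0, v1
```

The entire cipher is a few dozen integer operations per block with no lookup tables, which is exactly the trade-off lightweight designs make for constrained IoT hardware.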
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-27 | 2024-03-27 | Vol. 10 | DOI: 10.4108/eetiot.5565
Identification of Lithology from Well Log Data Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5634
<p>INTRODUCTION: Reservoir characterisation and geomechanical modelling benefit significantly from diverse machine learning techniques, addressing complexities inherent in subsurface information. Accurate lithology identification is pivotal, furnishing crucial insights into subsurface geological formations. Lithology is also pivotal in appraising hydrocarbon accumulation potential and optimising drilling strategies.</p><p>OBJECTIVES: This study employs multiple machine learning models to discern lithology from the well log data of the Volve Field.</p><p>METHODS: The well log data of the Volve field comprises 10,220 data points with diverse features influencing the target variable, lithology. The dataset encompasses four primary lithologies—sandstone, limestone, marl, and claystone—constituting a complex subsurface stratum. Lithology identification is framed as a classification problem, and four distinct ML algorithms are deployed to train and assess the models, partitioning the dataset into a 7:3 ratio for training and testing, respectively.</p><p>RESULTS: The resulting confusion matrix indicates a close alignment between predicted and true labels. While all algorithms exhibit favourable performance, the decision tree algorithm demonstrates the highest efficacy, yielding an exceptional overall accuracy of 0.98.</p><p>CONCLUSION: Notably, this model's training spans diverse wells within the same basin, showcasing its capability to predict lithology within intricate strata. Additionally, its robustness positions it as a potential tool for identifying other properties of rock formations.</p>Rohit, Shri Ram Manda, Aditya Raj, Akshay Dheeraj, Gopal Singh Rawat, Tanupriya Choudhury
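The RESULTS above hinge on a confusion matrix aligning predicted and true lithology labels. A small sketch of how such a matrix and the overall accuracy are computed; the four class labels follow the abstract, while the toy predictions below are invented purely for illustration:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, labels):
    """Build a confusion matrix: rows = true class, cols = predicted."""
    index = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[index[t], index[p]] += 1
    return cm

def accuracy(cm):
    """Overall accuracy: correct predictions sit on the diagonal."""
    return np.trace(cm) / cm.sum()
```

Off-diagonal cells show exactly which lithologies a classifier confuses (e.g. marl predicted as claystone), which is more informative for interpretation than a single accuracy figure.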
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-04-04 | 2024-04-04 | Vol. 10 | DOI: 10.4108/eetiot.5634
Robust GAN-Based CNN Model as Generative AI Application for Deepfake Detection
https://publications.eai.eu/index.php/IoT/article/view/5637
<p>One of the most well-known generative AI models is the Generative Adversarial Network (GAN), which is frequently employed for data generation or augmentation. In this paper, a reliable GAN-based CNN deepfake detection method is implemented, utilizing a GAN as an augmentation element. It aims to give the CNN model a large collection of images so that it can train better on the intrinsic qualities of the images. The major objective of this research is to show how GAN innovations have enhanced and broadened the use of generative AI principles, particularly in the classification of fake images, known as deepfakes, which pose concerns about misrepresentation and individual privacy. To identify these fake photos, additional synthetic images that closely resemble the training data are created using the GAN model. It has been observed that GAN-augmented datasets can improve the robustness and generality of CNN-based detection models, which correctly distinguish between real and fake images with 96.35% accuracy.</p>Preeti Sharma, Manoj Kumar, Hitesh Kumar Sharma
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-04-04 | 2024-04-04 | Vol. 10 | DOI: 10.4108/eetiot.5637
A Self-Supervised GCN Model for Link Scheduling in Downlink NOMA Networks
https://publications.eai.eu/index.php/IoT/article/view/6039
<p>INTRODUCTION: Downlink Non-Orthogonal Multiple Access (NOMA) networks pose challenges in optimizing power allocation efficiency due to their complex design.<br>OBJECTIVES: This paper aims to propose a novel scheme utilizing Graph Neural Networks to address the optimization challenges in downlink NOMA networks.<br>METHODS: We transform the optimization problem into an optimal link scheduling problem by modeling the network as a bipartite graph. Leveraging Graph Convolutional Networks, we employ self-supervised learning to learn the optimal link scheduling strategy.<br>RESULTS: Simulation results showcase a significant enhancement in power allocation efficiency in downlink NOMA networks, evidenced by notable improvements in both average accuracy and generalization ability.<br>CONCLUSION: Our proposed scheme demonstrates promising potential in substantially augmenting power allocation efficiency within downlink NOMA networks, offering a promising avenue for further research and application in wireless communications.</p>Caiya Zhang, Fang Fang, Congsong Zhang
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-05-13 | 2024-05-13 | Vol. 10 | DOI: 10.4108/eetiot.6039
Design and Implementation of an IOT-based Smart Home Automation System in Real World Scenario
https://publications.eai.eu/index.php/IoT/article/view/6201
<p>INTRODUCTION: Automation developments have significantly increased general convenience in modern technology. Our study focuses on developing and deploying an Internet of Things (IoT)–enabled smart home automation system that maximizes energy efficiency and improves convenience in residential settings. Our technology provides homes with an intelligent environment by smoothly combining several sensors, actuators, and communication protocols. We explore the challenges of creating a reliable and useful smart home system that works well in everyday situations, and we investigate the difficulties, architectural issues, communication protocols, and security features specific to these systems.<br />OBJECTIVES: This project aims to use a database server and Wi-Fi module to develop an effective and reasonably priced smart home automation system. It improves accessibility and convenience by enabling smartphone voice control of household appliances.<br />METHODS: This project was developed using various implementation techniques, including wireless home automation via GSM technology, speech recognition-based systems, Blynk, and the Internet of Things. Together, these techniques produced a reliable and effective smart home system.<br />RESULTS: Although it improved user routines, the IoT-based smart home system had latency problems, and interoperability issues are still present. Improvements were guided by user input. Using Blynk, our software manages loads remotely.<br />CONCLUSION: A home automation system that is inexpensive and locally supplied can effectively operate household appliances. It is adaptable, scalable, and a component of future smart homes. IoT applications in real life revolutionize convenience and efficiency in living environments.</p>Arka Adhikary, Saroj Halder, Rayith Bose, Shuvadeep Panja, Sourav Halder, Jayanta Pratihar, Arindam Dey
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-04 | 2024-06-04 | Vol. 10 | DOI: 10.4108/eetiot.6201
Privacy Preserving Authentication of IoMT in Cloud Computing
https://publications.eai.eu/index.php/IoT/article/view/6235
<p>INTRODUCTION: The Internet of Medical Things (IoMT) blends the healthcare industry with the IoT ecosystem and enables the creation, collection, transmission, and analysis of medical data through IoT networking. IoT networks consist of various healthcare IT systems, healthcare sensors, and healthcare management software.<br>OBJECTIVES: The IoMT breathes new life into the healthcare system by building a network that is intelligent, accessible, integrated, and effective. Privacy-preserving authentication in IoMT is difficult due to the distributed communication environment of heterogeneous IoMT devices. Although there has been considerable research on potential IoMT device authentication methods, more remains to be done on user authentication to deliver long-term IoMT solutions, and password handling is one of the big challenges of IoMT.<br>METHODS: In this paper, we present a quick, effective, and safe online password-less authentication technique for IoMT. To offer cross-platform functionality, the article includes a simulation of FIDO2/WebAuthn, one of the most recent standards for password-less authentication.<br>RESULTS: This makes it easier to secure and improve user credentials while preserving anonymity. The delays of the IoMT device registration and authentication processes are also assessed.<br>CONCLUSION: Results and simulations show that the efficacy of the proposed mechanism, with quick authentication on cloud servers, may be accomplished with the fewest registration and authentication procedures, regardless of device setup.</p>Garima Misra, B. Hazela, B.K. Chaurasia
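The abstract above simulates FIDO2/WebAuthn-style password-less authentication. A heavily simplified challenge-response flow conveys the register-then-verify structure; note that real WebAuthn uses per-credential public-key signatures, whereas this dependency-free sketch substitutes an HMAC over a shared secret, and all class and method names are invented for illustration:

```python
import hashlib
import hmac
import secrets

class Server:
    """Relying party: stores registered credentials and issues
    fresh random challenges so responses cannot be replayed."""

    def __init__(self):
        self.credentials = {}              # device_id -> registered secret

    def register(self, device_id, secret):
        self.credentials[device_id] = secret

    def challenge(self):
        return secrets.token_bytes(32)     # unpredictable per-login nonce

    def verify(self, device_id, chal, response):
        expected = hmac.new(self.credentials[device_id],
                            chal, hashlib.sha256).digest()
        # constant-time comparison avoids timing side channels
        return hmac.compare_digest(expected, response)

class Device:
    """Authenticator: proves possession of its secret by answering
    the server's challenge -- no password ever crosses the network."""

    def __init__(self, secret):
        self.secret = secret

    def respond(self, chal):
        return hmac.new(self.secret, chal, hashlib.sha256).digest()
```

Because each login binds the response to a fresh challenge, a captured response is useless for replay, which is the core property password-less schemes like FIDO2 provide (there, with asymmetric keys, so the server stores no secret at all).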
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-03 | 2024-06-03 | Vol. 10 | DOI: 10.4108/eetiot.6235
Real-Time Remote-Controlled Human Manipulation Medical Robot Using IoT Module
https://publications.eai.eu/index.php/IoT/article/view/6241
<p>INTRODUCTION: Innovative robotics and advanced computer vision technology converge in the Human Manipulation-Controlled Robot, utilized for medical applications. The robot operates through human gestures and includes a camera module for real-time visual feedback, enhancing its functionality and user interaction.</p><p>OBJECTIVES: The primary goal of the research was to harness the natural expressiveness of human gestures to provide a more intuitive and engaging method of controlling medical robots. The focus is on enabling precise control through programmed responses to specific gestures, ensuring effective interaction with medical tasks.</p><p>METHODS: The robot’s hardware configuration consists of a mobile platform with motorized components, an ESP32 module, gesture recognition sensors, and a camera module. The ESP32 module interprets signals from the gesture recognition sensors to execute precise commands for the robot's movements and actions. Simultaneously, the camera module captures live footage, providing visual feedback through an intuitive interface for seamless interaction.</p><p>RESULTS: The Human Manipulation-Controlled Robot has been successfully developed, featuring a fetch arm capable of autonomous movement and object manipulation. This research addresses critical needs in medical centers, demonstrating the feasibility of using only minimalistic EEG electrode wireless transmission to operate a robot effectively.</p><p>CONCLUSION: Through the provision of a more intuitive and engaging method of controlling and interacting with medical robots, this innovation has the potential to significantly improve user experience. It represents an important development in medical robotic vehicles, enhancing operational efficiency through advanced human-robot interaction techniques.</p>R. Kishore Kanna, Bhawani Sankar Panigrahi, Swati Sucharita, B Pravallika, Susanta Kumar Sahoo, Priya Gupta
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-19 | 2024-06-19 | Vol. 10 | DOI: 10.4108/eetiot.6241
Developing a Deep Learning-Based Multimodal Intelligent Cloud Computing Resource Load Prediction System
https://publications.eai.eu/index.php/IoT/article/view/6296
<p class="ICST-abstracttext"><span lang="EN-GB">This study aims to predict the dynamic changes in critical cloud computing resource indicators, namely Central Processing Unit (CPU), Random Access Memory (RAM), hard disk (Disk), and network. Its primary objective is to optimize resource allocation strategies in advance to enhance overall system performance. The research employs various deep learning algorithms, including Simple Recurrent Neural Network (SRNN), Bidirectional Simple Recurrent Neural Network (BiSRNN), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Through experimentation with different algorithm combinations, the study identifies optimal models for each specific resource indicator. Results indicate that combining CNN, LSTM, and GRU yields the most effective predictions for CPU load, while CNN and LSTM together are optimal for RAM load prediction. For disk load prediction, GRU alone proves optimal, and BiSRNN emerges as the optimal choice for network load prediction. The training results of these models demonstrate R-squared values (R²) exceeding 0.98, highlighting their high accuracy in predicting future resource dynamics. This multimodal, precise prediction capability facilitates timely and efficient resource allocation, thereby enhancing system responsiveness. Ultimately, this approach significantly contributes to sustainable digital advancement for enterprises by ensuring efficient resource allocation and consistent optimization of system performance. The study underscores the importance of integrating advanced deep learning techniques in managing cloud computing resources, thereby supporting the robust and sustainable growth of digital infrastructure.</span></p>Ruey-Chyi Wu
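Forecasters like the SRNN/CNN/LSTM/GRU models above share two generic, model-agnostic steps: framing a resource-load series as supervised windows, and scoring predictions with R². A minimal sketch of both (window width and data are illustrative, not the study's configuration):

```python
import numpy as np

def make_windows(series, width):
    """Frame a 1-D load series as supervised samples:
    `width` past readings -> the next reading."""
    X = np.stack([series[i:i + width]
                  for i in range(len(series) - width)])
    y = series[width:]
    return X, y

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual
    to total variance; 1.0 means a perfect fit."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot
```

Any of the recurrent or convolutional models can then be trained on `(X, y)` pairs, and an R² above 0.98, as reported, means the model explains almost all of the variance a naive baseline leaves unexplained.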
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-19 · Vol. 10 · DOI: 10.4108/eetiot.6296

A Multifaceted Approach at Discerning Redditors Feelings Towards ChatGPT
https://publications.eai.eu/index.php/IoT/article/view/6447
<p class="ICST-abstracttext"><span lang="EN-GB">Generative AI platforms like ChatGPT have leapfrogged in terms of technological advancements. Traditional methods of scrutiny are not enough for assessing their technological efficacy. Understanding public sentiment and feelings towards ChatGPT is crucial for anticipating the technology’s longevity and impact, while also providing a silhouette of human psychology. Social media platforms have seen tremendous growth in recent years, resulting in a surge of user-generated content. Among these platforms, Reddit stands out as a forum for users to engage in discussions on various topics, including Generative Artificial Intelligence (GAI) and chatbots. Traditional approaches to social media sentiment analysis and opinion mining are time-consuming and resource-heavy, while lacking representation. This paper provides a novel, multi-pronged approach that utilises and integrates various techniques for better results. The data collection and preparation are done through the Reddit API in tandem with multi-stage weighted and stratified sampling. NLP (Natural Language Processing) techniques encompassing LDA (Latent Dirichlet Allocation) topic modelling, STM (Structured Topic Modelling), and sentiment and emotional analysis using RoBERTa are deployed for opinion mining. To verify, substantiate and scrutinise all variables in the dataset, multiple hypotheses are tested using ANOVA, t-tests, the Kruskal–Wallis test, the Chi-Square test and the Mann–Whitney U test. The study provides a novel contribution to the growing literature on social media sentiment analysis and has significant new implications for discerning user experience and engagement with AI chatbots like ChatGPT.</span></p>Shreyansh PadarhaVijayalakshmi S.
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-28 · Vol. 10 · DOI: 10.4108/eetiot.6447

A Mobile Lens: Voice-Assisted Smartphone Solutions for the Sightless to Assist Indoor Object Identification
https://publications.eai.eu/index.php/IoT/article/view/6450
<p class="ICST-abstracttext"><span lang="EN-GB">Every aspect of life is organized around sight. For visually impaired individuals, accidents often occur while walking due to collisions with people or walls. To navigate and perform daily tasks, visually impaired people typically rely on white cane sticks, assistive trained guide dogs, or volunteer individuals. However, guide dogs are expensive, making them unaffordable for many, especially since 90% of fully blind individuals live in low-income countries. Vision is crucial for participating in school, reading, walking, and working. Without it, people struggle with independent mobility and quality of life. While numerous applications are developed for the general public, there is a significant gap in mobile on-device intelligent assistance for visually challenged people. Our custom mobile deep learning model shows object classification accuracy of 99.63%. This study explores voice-assisted smartphone solutions as a cost-effective and efficient approach to enhance the independent mobility, navigation, and overall quality of life for visually impaired or blind individuals.</span></p>Talal SaleemV. Sivakumar
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-28 · Vol. 10 · DOI: 10.4108/eetiot.6450

Big Mart Sales Prediction using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/6453
<p>INTRODUCTION: Sales prediction, also known as revenue forecasting or sales forecasting, refers to the process of accurately and timely estimating future revenue for manufacturers, distributors, and retailers, providing them with valuable insights. Sales prediction plays a crucial role in various industries, particularly in sectors such as retail, automotive leasing, real estate transactions, and other conventional businesses. <br>OBJECTIVES: This paper focuses on developing a sales prediction model for Big Mart, a supermarket chain, using machine learning algorithms. The developed model aims to provide Big Mart with accurate sales forecasts, enabling better decision-making, improved profitability, and enhanced customer service.<br>METHODS: The study utilises the CRISP-DM methodology and explores various machine learning algorithms, including Linear Regression, Decision Tree, Random Forest, XGBoost, Stacked Ensemble Model, and K-Nearest Neighbours (KNN). The dataset used for model development is sourced from Kaggle and includes information about products, stores, and sales. Pre-processing techniques are applied to handle missing data and feature engineering.<br>RESULTS: The XGBoost Regression Model Tuned with RandomizedSearchCV outperforms the existing models with an RMSE of 1018.82 and an R² of 0.6181.<br>CONCLUSION: This research contributes to the field of sales forecasting in the retail industry and provides insights for businesses looking to enhance their revenue prediction capabilities.</p>Koh Ya WenMinnu Helen JosephV. Sivakumar
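The RMSE and R² figures reported above follow the standard definitions, which can be sketched in plain Python; the sales figures below are toy values for illustration, not the Kaggle dataset used in the paper:

```python
import math

def rmse(y_true, y_pred):
    # Root Mean Squared Error: typical magnitude of prediction error
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    # R²: fraction of the variance in y_true explained by the predictions
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy sales values (hypothetical, not from the Big Mart data)
sales_true = [2100.0, 1500.0, 3200.0, 2700.0]
sales_pred = [2000.0, 1600.0, 3100.0, 2900.0]
print(rmse(sales_true, sales_pred), r_squared(sales_true, sales_pred))
```

A lower RMSE and a higher R² both indicate a better fit, which is how the tuned XGBoost model (RMSE 1018.82, R² 0.6181) was judged against the other candidates.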
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-06-28 · Vol. 10 · DOI: 10.4108/eetiot.6453

Reimagining Accessibility: Leveraging Deep Learning in Smartphone Applications to Assist Visually Impaired People Indoor Object Distance Estimation
https://publications.eai.eu/index.php/IoT/article/view/6501
<p>Every aspect of life is organized around sight. A person with vision impairment suffers severe limitations in independent mobility and quality of life. The proposed framework combines mobile deep learning with distance estimation algorithms to detect and classify indoor objects, with estimated distances, in real time during indoor navigation. The user, wearing the device on a lanyard or holding it so that the camera faces forward, receives real-time identification of surrounding indoor objects together with estimated distances and voice commentary. Moreover, the mobile framework provides an estimated distance to obstacles and suggests a safe navigational path through voice-guided feedback. By harnessing the power of deep learning in a mobile setting, this framework aims to improve the independence of visually impaired individuals by affording them a higher degree of independence in indoor navigation. This study's proposed mobile object detection and distance estimation framework achieved 99.75% accuracy. This research contributes a state-of-the-art approach that leverages mobile deep learning for real-time object identification, classification, and distance estimation, using the latest technologies to address the indoor mobility challenges faced by visually impaired people.</p>Talal SaleemV Sivakumar
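The abstract does not spell out its distance-estimation algorithm; a common single-camera baseline is the pinhole-camera similar-triangles relation, in which an object of known physical width appears proportionally smaller the farther away it is. A minimal sketch, with all calibration numbers hypothetical:

```python
def estimate_distance_m(real_width_m, focal_length_px, width_in_image_px):
    # Pinhole-camera similar triangles: distance = W * f / w, where W is the
    # object's real width, f the focal length in pixels, and w its pixel width.
    return real_width_m * focal_length_px / width_in_image_px

# Hypothetical calibration: a door 0.9 m wide, focal length ~1000 px.
# If the door spans 300 px in the frame, it is about 3 m away.
print(estimate_distance_m(0.9, 1000.0, 300.0))  # -> 3.0
```

Halving the pixel width doubles the estimated distance, which is the behaviour a voice-guided navigation loop would rely on when warning about approaching obstacles.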
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-03 · Vol. 10 · DOI: 10.4108/eetiot.6501

Constructing an Intelligent Environmental Monitoring and Forecasting System: Fusion of Deep Neural Networks and Gaussian Smoothing
https://publications.eai.eu/index.php/IoT/article/view/6519
<p>To enhance monitoring of environmental indicators like temperature, humidity, and carbon dioxide (CO₂) concentration in data centers, this study evaluates various deep neural network (DNN) models and improves their forecast accuracy using Gaussian smoothing. Initially, multiple DNN architectures were assessed. Following these evaluations, the optimal algorithm was selected for each indicator: CNN for temperature, LSTM for humidity, and a hybrid LSTM-GRU model for CO₂ concentration. These models underwent further refinement through Gaussian smoothing and re-training to enhance their forecasting capabilities. The results demonstrate that Gaussian smoothing significantly enhanced forecast accuracy across all indicators. For instance, R² values notably increased: the temperature forecast improved from 0.59925 to 0.98012, humidity from 0.63305 to 0.99628, and CO₂ concentration from 0.71204 to 0.99855. Thus, this study highlights the potential of DNN models in environmental monitoring after Gaussian smoothing, providing precise forecasting tools and real-time monitoring support for informed decision-making in the future.</p>Ruey-Chyi Wu
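The Gaussian smoothing step above can be sketched with a hand-rolled discrete kernel; the paper does not state which implementation it used, and the readings below are toy values:

```python
import math

def gaussian_kernel(radius, sigma):
    # Discrete Gaussian weights over [-radius, radius], normalised to sum to 1
    w = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def gaussian_smooth(series, radius=2, sigma=1.0):
    # Convolve the series with the kernel, clamping indices at the edges
    k = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(series)):
        acc = 0.0
        for j, weight in enumerate(k):
            idx = min(max(i + j - radius, 0), len(series) - 1)
            acc += weight * series[idx]
        out.append(acc)
    return out

# Toy noisy temperature readings (°C); smoothing damps the jitter
raw = [22.0, 22.8, 21.9, 23.1, 22.2, 23.0, 22.1, 22.9]
print(gaussian_smooth(raw))
```

Smoothing the noisy targets (or predictions) before re-training reduces high-frequency jitter, which is one plausible reason the reported R² values improve so sharply.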
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-08 · Vol. 10 · DOI: 10.4108/eetiot.6519

Proof-of-resource: A resource-efficient consensus mechanism for IoT devices in blockchain networks
https://publications.eai.eu/index.php/IoT/article/view/6565
<p>In this paper, we propose an innovative, lightweight, and energy-efficient consensus mechanism, Proof-of-Resource (PoR), custom-designed for Internet of Things (IoT) devices in blockchain networks. As IoT's integration with blockchain faces hurdles such as scalability, resource efficiency, and security, conventional blockchain consensus mechanisms prove unsuitable due to IoT devices' resource limitations. The PoR is a breakthrough that capitalizes on IoT device resources' inherent capabilities to achieve consensus, thus enabling secure and efficient data exchange while minimizing resource consumption. Our paper presents the comprehensive design of PoR, discussing aspects like initialization, resource verification, consensus protocol, validator selection, block validation, and rewards. Through a simulation involving fifteen IoT devices, we demonstrate that PoR effectively addresses key challenges in IoT-blockchain integration, signifying a significant step forward in enabling blockchain technology for IoT systems.</p>Mahmoud AbbasiJavier PrietoMarta Plaza-HernandezJuan Manuel Corchado
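The abstract does not disclose the exact PoR validator-selection rule; one plausible reading of "validator selection" based on verified resources is selection with probability proportional to a device's resource score. A hedged sketch of that idea only — device names and scores are hypothetical, not from the paper:

```python
import random

def select_validator(devices, rng):
    # Pick a validator with probability proportional to its verified
    # resource score (a stand-in for PoR's resource-verification step).
    total = sum(devices.values())
    pick = rng.uniform(0, total)
    running = 0.0
    for device_id, score in devices.items():
        running += score
        if pick <= running:
            return device_id
    return device_id  # numeric edge case: fall back to the last device

# Hypothetical resource scores (CPU/RAM/bandwidth folded into one number)
devices = {"sensor-a": 5.0, "sensor-b": 1.0, "sensor-c": 1.0}
rng = random.Random(42)
counts = {d: 0 for d in devices}
for _ in range(7000):
    counts[select_validator(devices, rng)] += 1
print(counts)  # sensor-a should win roughly 5/7 of the rounds
```

Weighting by verified resources rather than by hash power is what makes such a scheme cheap enough for constrained IoT hardware, which is the motivation the paper gives for PoR.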
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-09 · Vol. 10 · DOI: 10.4108/eetiot.6565

Synthetic Malware Using Deep Variational Autoencoders and Generative Adversarial Networks
https://publications.eai.eu/index.php/IoT/article/view/6566
<p>The effectiveness of detecting malicious files heavily relies on the quality of the training dataset, particularly its size and authenticity. However, the lack of high-quality training data remains one of the biggest challenges in achieving widespread adoption of malware detection by trained machine and deep learning models. In response to this challenge, researchers have made initial strides by employing generative techniques to create synthetic malware samples. This work utilizes deep variational autoencoders (VAE) and generative adversarial networks (GAN) to produce malware samples as opcode sequences. The generated malware opcodes are then distinguished from authentic opcode samples using machine and deep learning techniques as validation methods. The primary objective of this study was to compare synthetic malware generated using VAE and GAN technologies. The results showed that neither approach could create synthetic malware that could deceive machine learning classification. However, the WGAN-GP algorithm showed more promise: it required a higher number of synthetic malware samples in the training set before being effectively detected, proving it the better approach to synthetic malware generation.</p>Aaron ChoiAlbert GiangSajit JumaniDavid LuongFabio Di Troia
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-09 · Vol. 10 · DOI: 10.4108/eetiot.6566

Mitigating Adversarial Reconnaissance in IoT Anomaly Detection Systems: A Moving Target Defense Approach based on Reinforcement Learning
https://publications.eai.eu/index.php/IoT/article/view/6574
<p>The machine learning (ML) community has extensively studied adversarial threats on learning-based systems, emphasizing the need to address the potential compromise of anomaly-based intrusion detection systems (IDS) through adversarial attacks. On the other hand, investigating the use of moving target defense (MTD) mechanisms in Internet of Things (IoT) networks is ongoing research, with unfathomable potential to equip IoT devices and networks with the ability to fend off cyber attacks despite their computational deficiencies. In this paper, we propose a game-theoretic model of MTD to render the configuration and deployment of anomaly-based IDS more dynamic through diversification of feature training in order to minimize successful reconnaissance on ML-based IDS. We then solve the MTD problem using a reinforcement learning method to generate the optimal shifting policy within the network without a prior network transition model. The state-of-the-art ToN-IoT dataset is investigated for feasibility to implement the feature-based MTD approach. The overall performance of the proposed MTD-based IDS is compared to a conventional IDS by analyzing the accuracy curve for varying attacker success rates. Our approach has proven effective in increasing the resilience of the IDS against adversarial learning.</p>Arnold OseiYaser Al MtawaTalal Halabi
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-10 · Vol. 10 · DOI: 10.4108/eetiot.6574

Scalable Image Clustering to screen for self-produced CSAM
https://publications.eai.eu/index.php/IoT/article/view/6631
<p>The number of cases involving Child Sexual Abuse Material (CSAM) has increased dramatically in recent years, resulting in significant backlogs. To protect children in the suspect’s sphere of influence, immediate identification of self-produced CSAM among acquired CSAM is paramount. Currently, investigators often rely on an approach based on a simple metadata search. However, this approach faces scalability limitations for large cases and is ineffective against anti-forensic measures. Therefore, to address these problems, we bridge the gap between digital forensics and state-of-the-art data science clustering approaches. Our approach enables clustering of more than 130,000 images, which is eight times larger than previous achievements, using commodity hardware and within an hour with the ability to scale even further. In addition, we evaluate the effectiveness of our approach on seven publicly available forensic image databases, taking into account factors such as anti-forensic measures and social media post-processing. Our results show an excellent median clustering-precision (Homogeneity) of 0.92 on native images and a median clustering-recall (Completeness) of over 0.92 for each test set. Importantly, we provide full reproducibility using only publicly available algorithms, implementations, and image databases.</p>Samantha KlierHarald Baier
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-15 · Vol. 10 · DOI: 10.4108/eetiot.6631

Massively Parallel Evasion Attacks and the Pitfalls of Adversarial Retraining
https://publications.eai.eu/index.php/IoT/article/view/6652
<p>Even with widespread adoption of automated anomaly detection in safety-critical areas, both classical and advanced machine learning models are susceptible to first-order evasion attacks that fool models at run-time (e.g. an automated firewall or an anti-virus application). Kernelized support vector machines (KSVMs) are an especially useful model because they combine a complex geometry with low run-time requirements, acting as a run-time lower bound relative to contemporary models (e.g. deep neural networks) and providing a cost-efficient way to measure model and attack run-time costs. To properly measure and combat adversaries, we propose a massively parallel projected gradient descent (PGD) evasion attack framework. Through theoretical examinations and experiments carried out using linearly-separable Gaussian normal data, we present (i) a massively parallel naive attack, showing that adversarial retraining is unlikely to be an effective means to combat an attacker even on linearly separable datasets; (ii) a cost-effective way of evaluating model defences and attacks, and an extensible code base for doing so; (iii) an inverse relationship between adversarial robustness and benign accuracy; (iv) the lack of a general relationship between attack time and efficacy; and (v) that adversarial retraining increases compute time exponentially while failing to reliably prevent highly-confident false classifications.</p>Charles MeyersTommy LöfstedtErik Elmroth
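A single projected-gradient evasion step is easiest to see on a linear decision function, where the gradient with respect to the input is simply the weight vector. The paper attacks kernelized SVMs; the linear case below is a simplified illustration with made-up numbers, and one l-infinity-bounded step like this coincides with the fast gradient sign method:

```python
def decision(w, b, x):
    # Linear decision function f(x) = w·x + b; sign(f) gives the class
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evasion_step(w, b, x, eps):
    # One l-inf projected-gradient step: nudge each feature by eps against
    # sign(w_i), the direction that shrinks the margin f(x) fastest.
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w, b = [1.0, 2.0, -1.0], -0.5
x = [0.6, 0.4, 0.2]          # benign point: f(x) = 0.7 > 0
adv = evasion_step(w, b, x, eps=0.3)
print(decision(w, b, x), decision(w, b, adv))
```

The perturbation stays within an eps-ball of the original point, yet the decision value flips sign: exactly the run-time evasion behaviour the framework above measures at scale.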
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-07-17 · Vol. 10 · DOI: 10.4108/eetiot.6652

A Machine Learning-Based Method for Predicting the Classification of Aircraft Damage
https://publications.eai.eu/index.php/IoT/article/view/6936
<p class="ICST-abstracttext">Efficient and accurate classification of aircraft damage is paramount in ensuring the safety and reliability of air transportation. This research uses a machine learning-based approach tailored to predict the classification of aircraft damage with high precision and reliability to achieve data-driven insights as input for the improvement of safety standards. Leveraging a diverse dataset encompassing various types and severities of damage instances, our methodology harnesses the power of machine learning algorithms to discern patterns and correlations within the data. The approach involves using extensive datasets consisting of various structural attributes, flight data, and environmental conditions. The Random Forest algorithm, Support Vector Machine, and Neural Networks methods used in the research are more accurate than traditional methods, providing detailed information on the factors contributing to damage severity. By using machine learning, maintenance schedules can be optimized and flight safety can be improved. This research is a significant step toward predictive maintenance, which is poised to improve safety standards in the aerospace industry.</p>Imron RosadiFreddy FranciscusMuhammad Hadi Widanto
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-15 · Vol. 10 · DOI: 10.4108/eetiot.6936

Artificial Intelligence Application on Aircraft Maintenance: A Systematic Literature Review
https://publications.eai.eu/index.php/IoT/article/view/6938
<p class="ICST-abstracttext" style="margin-left: 14.2pt;">Maintenance is an essential aspect of supporting aircraft operations. However, there are still several obstacles and challenges in the process, such as incomplete technical record data, irregular maintenance schedules, unscheduled component replacement, unavailability of tools or components, recurring problems, and a long time for troubleshooting. Digitalization and the massive use of artificial intelligence (AI) in various sectors have been widely carried out in the industry 5.0 era today, especially in the aviation industry. It offers several advantages to optimize aircraft maintenance and operations, such as predictive maintenance, fault detection, failure diagnosis, and intelligent monitoring systems. The utilization of AI has the potential to solve obstacles and challenges in aircraft maintenance activities, such as improving aircraft reliability, reducing aircraft downtime, improving safety, and reducing maintenance costs. This research uses the Systematic Literature Review method, which aims to review and provide an understanding of objectives, strategies, methods, and equipment objects involved in the application of AI in aircraft maintenance and repair scope. The findings and understanding from this research can be used as a basis for utilizing or adopting AI in aircraft maintenance to be more targeted and efficient in the future. This study reviews and presents research trends from reputable journals and proceedings screened using a unique protocol.</p>Erna Shevilia AgustianZastra Alfarezi Pratama
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-15 · Vol. 10 · DOI: 10.4108/eetiot.6938

Enhancing User Query Comprehension and Contextual Relevance with a Semantic Search Engine using BERT and ElasticSearch
https://publications.eai.eu/index.php/IoT/article/view/6993
<p>This research paper explores the development of a semantic search engine designed to enhance user query comprehension and deliver contextually relevant search results. Classic search engines often struggle to capture the nuanced meaning of user queries, leading to suboptimal results. To address this challenge, we combine advanced natural language processing (NLP) techniques with Elasticsearch, with a specific focus on Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art pre-trained language model. Our approach leverages BERT's ability to analyze the contextual meaning of words within documents through Sentence Transformers (SBERT), converting content into vector embeddings that the Elasticsearch server can index, so the search engine can better understand both user queries and the semantics of the content. By utilizing BERT's bidirectional attention mechanism, the search engine can discern the relationships between words, thereby capturing the contextual nuances that are crucial for accurate query interpretation. Through experimental validation and performance assessments, we demonstrate the efficacy of our semantic search engine in providing contextually relevant search results. This research contributes to the advancement of search technology by enhancing the intelligence of search engines, ultimately improving the user experience through context-based results.</p>Saniya M ladanavarRitu KambleR.H GoudarRohit. B. KaliwalVijayalaxmi RathodSanthosh L DeshpandeDhananjaya G MAnjanabhargavi Kulkarni
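At its core, the retrieval step ranks documents by cosine similarity between the query embedding and document embeddings; in the paper this happens inside Elasticsearch's vector search, and the vectors below are tiny hand-made stand-ins for SBERT output:

```python
import math

def cosine(u, v):
    # Cosine similarity: angle-based closeness of two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def rank(query_vec, doc_vecs):
    # Return document ids ordered from most to least similar to the query
    return sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]),
                  reverse=True)

# Toy 3-d embeddings standing in for SBERT sentence vectors
docs = {
    "doc-parking": [0.9, 0.1, 0.0],
    "doc-weather": [0.1, 0.9, 0.1],
    "doc-sports":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # an embedded user query
print(rank(query, docs))
```

Because similarity is computed on embeddings rather than shared keywords, a query and a document can match even when they use different words for the same concept, which is the contextual relevance the paper targets.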
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-21 · Vol. 10 · DOI: 10.4108/eetiot.6993

Enhancing Education with ChatGPT: Revolutionizing Personalized Learning and Teacher Support
https://publications.eai.eu/index.php/IoT/article/view/6998
<p class="ICST-abstracttext"><span lang="EN-GB">As we embrace the digital age, artificial intelligence (AI) has become an essential part of our lives, and education is no exception. ChatGPT, OpenAI's cutting-edge language processing AI, stands at the forefront of transforming our approach to education. This article delves into the myriad ways in which ChatGPT can assist educators in reshaping their teaching methodologies and enhancing classroom interactions: providing personalized learning experiences, simplifying complex concepts, and enhancing student engagement. We also discuss real-world examples of its successful implementation and its potential future in the education sector. At the same time, we acknowledge the limitations of ChatGPT and the need for careful consideration before its implementation. By emphasizing the role of technology in enhancing education, this article highlights how AI, such as ChatGPT, can bring about positive transformations in today's classrooms.</span></p>Dhananjaya G MR H GoudarGovindaraja KRohit B KaliwalVijayalaxmi K RathodSanthosh L Deshpande Anjanabhargavi KulkarniGeetabai S Hukkeri
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-21 · Vol. 10 · DOI: 10.4108/eetiot.6998

Speech Emotion Recognition using Extreme Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/4485
<p class="ICST-abstracttext"><span lang="EN-GB">Speech Emotion Recognition (SER) is the task of detecting the underlying emotion in spoken language. It is a challenging task, as emotions are subjective and highly contextual. Machine learning algorithms have been widely used for SER, and one such algorithm is the Gaussian Mixture Model (GMM). The GMM is a statistical model that represents the probability distribution of a random variable as a sum of Gaussian distributions, and it has been widely used for speech recognition and classification tasks. In this article, we offer a method for SER using Extreme Machine Learning (EML) with the GMM algorithm. EML is a type of machine learning that uses randomization to achieve high accuracy at a low computational cost, and it has been effectively utilised in various classification tasks. The proposed approach includes two steps: feature extraction and emotion classification. Mel-Frequency Cepstral Coefficients (MFCCs) are used to extract features; MFCCs are commonly used for speech processing and represent the spectral envelope of the speech signal. The GMM algorithm is used for emotion classification: the input features are modelled as a mixture of Gaussians, and the emotion is classified based on the likelihood of the input features belonging to each Gaussian. The proposed method was evaluated on the Berlin Database of Emotional Speech (EMO-DB) and achieved an accuracy of 74.33%. In conclusion, the proposed approach to SER using EML and the GMM algorithm shows promising results. It is a computationally efficient and effective approach to SER and can be used in various applications, such as speech-based emotion detection for virtual assistants, call centre analytics, and emotional analysis in psychotherapy.</span></p>Valli Madhavi KotiKrishna MurthyM SuganyaMeduri Sridhar SarmaGollakota V S S Seshu KumarBalamurugan N
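A one-component special case of the GMM classification step can be sketched by fitting a single Gaussian per emotion and choosing the maximum-likelihood class; real systems fit multi-component mixtures over multi-dimensional MFCC vectors, and the 1-D features below are purely hypothetical stand-ins:

```python
import math

def gaussian_pdf(x, mean, var):
    # Likelihood of x under a 1-D Gaussian with the given mean and variance
    return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(samples):
    # Estimate mean and variance from training samples (one-component "GMM")
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, v

def classify(x, models):
    # Pick the emotion whose Gaussian assigns x the highest likelihood
    return max(models, key=lambda e: gaussian_pdf(x, *models[e]))

# Toy 1-D "MFCC" features per emotion (hypothetical numbers)
train = {"angry": [5.1, 4.8, 5.3, 4.9], "calm": [1.0, 1.2, 0.9, 1.1]}
models = {emotion: fit(xs) for emotion, xs in train.items()}
print(classify(5.0, models), classify(1.05, models))
```

A full GMM simply replaces each single Gaussian with a weighted sum of several, letting one emotion's feature distribution have multiple modes.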
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-27 · Vol. 10 · DOI: 10.4108/eetiot.4485

Milk Quality Prediction Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/4501
<p>Milk is a staple of every individual's diet. High-quality milk shouldn't contain any adulterants. Dairy products are sold everywhere in society, yet local milk vendors use a wide range of adulterants in their products, permanently degrading them. Using milk that has gone bad can have serious health consequences. On October 18 of this year, the Food Safety and Standards Authority of India (FSSAI), the nation's top food safety authority, released the final result of the National Milk Safety and Quality Survey (NMSQS) and declared the milk readily available in India to be "mostly safe." According to an FSSAI survey, 68.4% of the milk in India is tainted. The quality of milk cannot be checked by any single piece of equipment or special system. Milk that has not been pasteurized has not been treated to get rid of harmful bacteria. Infected raw milk may contain Salmonella, Campylobacter, Cryptosporidium, E. coli, Listeria, Brucella, and other dangerous pathogens. These microorganisms pose a major risk to your family's health. Manually analyzing the various milk constituents can be very challenging when determining the quality of the milk. Machine learning can assist with this analysis. Here a machine learning-based milk quality prediction system is developed. The proposed technology has shown 99.99% classification accuracy.</p>Drashti BhavsarYash JobanputraNirmal Keshari SwainDebabrata Swain
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-11-29 · Vol. 10 · DOI: 10.4108/eetiot.4501

Indian Budget 2022: A Make-or-Break Moment for Cryptocurrency
https://publications.eai.eu/index.php/IoT/article/view/4540
<p class="ICST-abstracttext"><span lang="EN-GB">People are liable to the tax rate if they transfer digital assets during a specific fiscal year. There is no distinction between income from businesses and investments or between short-term and long-term gains, because the 30% tax rate is applicable regardless of the sort of income. By clearly stating how it would be charged, the Indian budget 2022 has provided some direction. Losses were consequently experienced by both new and old cryptocurrency buyers. Under Section 115BBH, it is not permitted to offset cryptocurrency losses with cryptocurrency gains—or any other gains or revenue, for that matter. The implementation of the 30% tax rule on digital assets has caused the collapse of the cryptocurrency market, and there is a possibility that investors will continue to suffer losses in the future.</span></p>Preethi NanjundanBlesson Varghese JamesJossy P GeorgeDilpreet Kaur KukrejaYugjeet Singh Goyal
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-05 · Vol. 10 · DOI: 10.4108/eetiot.4540

Security in Mobile Network: Issues, Challenges and Solutions
https://publications.eai.eu/index.php/IoT/article/view/4542
<p>INTRODUCTION: Mobile devices are integrated into people's daily activities. Compared to desktop computers, mobile devices have grown tremendously in recent years. This growth opens vast scope for attackers targeting these devices.</p><p>OBJECTIVES: This paper presents a deep study of the different types of security risks involved in mobile devices and mobile applications.</p><p>METHODS: In this paper we study various mechanisms of security risks for mobile devices and their applications. We also study how to prevent these security risks in mobile devices.</p><p>RESULTS: Various solutions are provided in this paper through which operators can protect the security and privacy of user data and keep their customers' trust by implementing these procedures.</p><p>CONCLUSION: This paper concludes with solutions for providing a secure mobile network. This paper is structured as follows. Section 2 contains related work. Section 3 describes security problems. Section 4 discusses defensive methods and Section 5 gives the conclusion.</p>Ruby DahiyaAnjali KashyapBhupendra SharmaRahul Kumar SharmaNidhi Agarwal
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-06 · Vol. 10 · DOI: 10.4108/eetiot.4542

A Systematic Review on Various Task Scheduling Algorithms in Cloud Computing
https://publications.eai.eu/index.php/IoT/article/view/4548
<p class="ICST-abstracttext"><span lang="EN-GB">Task scheduling in cloud computing involves allocating tasks to virtual machines based on factors such as node availability, processing power, memory, and network connectivity. For task scheduling we have various scheduling algorithms that are nature-inspired, bio-inspired, and metaheuristic, but we still have latency issues because it is an NP-hard problem. This paper reviews the existing task scheduling algorithms modelled by metaheuristics, nature-inspired algorithms, and machine learning, which address various scheduling parameters like cost, response time, energy consumption, quality of service, execution time, resource utilization, makespan, and throughput, but do not address parameters like trust or fault tolerance. Trust and fault tolerance have an impact on task scheduling: trust is necessary for assigning tasks and responsibility to systems, while fault tolerance ensures that the system can continue to operate even when failures occur. A balance of trust and fault tolerance gives quality of service and efficient task scheduling; therefore, this paper has analysed parameters like trust and fault tolerance and given research directions.</span></p>Mallu Shiva Rama KrishnaSudheer Mangalampalli
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-06 2023-12-06 10 10.4108/eetiot.4548
An Effective analysis on various task scheduling algorithms in Fog computing
https://publications.eai.eu/index.php/IoT/article/view/4589
<p>Fog computing has evolved as an extension of cloud and distributed systems: fog nodes allow data to be processed closer to edge devices, reducing the latency, bandwidth, and storage demands of IoT tasks. Task scheduling in fog computing involves allocating tasks to fog nodes based on factors such as node availability, processing power, memory, and network connectivity. Various nature-inspired and bio-inspired scheduling algorithms exist, but latency issues remain because task scheduling is an NP-hard problem. This paper reviews the existing task scheduling algorithms modelled by metaheuristic, nature-inspired, and machine learning approaches, which address scheduling parameters such as cost, response time, energy consumption, quality of service, execution time, resource utilization, makespan, and throughput; however, parameters such as trust and fault tolerance are not addressed by many existing authors. Both have an impact on task scheduling: trust is necessary when assigning tasks and responsibility to systems, while fault tolerance ensures that the system can continue to operate even when failures occur. A balance of trust and fault tolerance yields quality of service and efficient task scheduling; this paper therefore analyses the trust and fault tolerance parameters and gives research directions.</p>Prashanth Choppara, Sudheer Mangalampalli
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-13 2023-12-13 10 10.4108/eetiot.4589
Efficient SDN-based Task offloading in fog-assisted cloud environment
https://publications.eai.eu/index.php/IoT/article/view/4591
<p>A distributed computing model called "fog computing" provides cloud-like services closer to end devices and is rapidly gaining popularity. It offers cloud-like computing and storage capabilities, but with lower latency and bandwidth requirements, thereby improving the computation capabilities of IoT devices and mobile nodes. In addition, fog computing offers advantages such as support for context awareness, scalability, dependability, and node mobility. Fog computing is frequently used to offload tasks from end devices' applications, enabling quicker execution using the fog nodes' capabilities. Task offloading is challenging because of the changing nature of the fog environment and the multiple QoS criteria that depend on the type of application being used. This article proposes an SDN-based offloading technique to optimize the scheduling and processing of tasks generated by Internet of Space Things (IoST) devices. The proposed technique utilizes Software-Defined Networking (SDN) optimization to dynamically manage network resources and to facilitate the deployment and execution of offloaded tasks. A GI/G/r queueing model is used to compute the optimal number of virtual machines (VMs) to be allocated in the fog network to process the offloaded tasks. This approach minimizes the delay-sensitive task queue and the necessary number of VMs while reducing the waiting time at the fog layer. Simulation findings verify the effectiveness of the proposed model.</p>Bibhuti Bhusan Dash, Rabinarayan Satpathy, Sudhansu Shekar Patra
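The abstract above sizes the fog VM pool with a GI/G/r queueing model. As a minimal sketch (not the paper's actual formulation), the smallest stable pool can be read off the utilisation condition rho = lambda / (r * mu); the arrival rate, service rate, and utilisation target below are all hypothetical:

```python
import math

def min_vms(arrival_rate, service_rate, max_util=0.8):
    """Smallest VM count r keeping utilization rho = lambda / (r * mu)
    at or below max_util, the stability margin of a GI/G/r-style queue."""
    return max(1, math.ceil(arrival_rate / (service_rate * max_util)))
```

In the paper this sizing is driven by the full GI/G/r waiting-time analysis; the utilisation bound above is only the back-of-envelope stability check.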
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-13 2023-12-13 10 10.4108/eetiot.4591
Comparative Analysis of Deep Learning Models for Accurate Detection of Plant Diseases: A Comprehensive Survey
https://publications.eai.eu/index.php/IoT/article/view/4595
<p>Agriculture plays an important role in the economic growth of any nation and has a significant effect on global GDP. Improvements in agricultural production also help greatly in controlling inflation. A large percentage of the population of rural India is still dependent on agriculture, yet every year huge losses occur due to various plant diseases. Farmers are often unable to recognise a plant disease at its early stage due to insufficient knowledge, and sometimes seek the help of agriculture officers. However, if the infection has grown by that point, it typically leads to a significant crop loss, and diagnoses made by agriculture officers based on past experience are not always accurate. Computer vision-based solutions can deal with this problem to a large extent: computer vision covers algorithms that enable a computer to identify hidden patterns in image or video data. In this work, a detailed investigation is performed of the computer vision-based solutions proposed by different authors to detect various crop diseases.</p>Amol Bhilare, Debabrata Swain, Niraj Patel
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-13 2023-12-13 10 10.4108/eetiot.4595
Enhancing Crop Growth Efficiency through IoT-enabled Smart Farming System
https://publications.eai.eu/index.php/IoT/article/view/4604
<p class="ICST-abstracttext"><span lang="EN-GB">The agricultural sector is facing significant challenges in meeting the increasing demands for food production while ensuring sustainability and resource efficiency. To address these challenges, the integration of Internet of Things (IoT) technology into farming practices has gained attention as a promising solution. This research focuses on the development and implementation of an IoT-enabled smart farming system aimed at enhancing crop growth efficiency. The proposed system leverages IoT sensors and devices to monitor and collect real-time data on various parameters such as environmental conditions, soil moisture levels, and crop health. The collected data is then analyzed using advanced analytics techniques to gain valuable insights and make informed decisions regarding irrigation, fertilization, and pest control. By utilizing IoT technology, farmers can optimize their resource utilization, reduce waste, and maximize crop productivity. This research aims to investigate the potential benefits and challenges associated with implementing the IoT-enabled smart farming system. In this paper, a cutting-edge Internet of Things (IoT) technology is explored for monitoring weather and soil conditions for efficient crop development. The system was built to monitor temperature, humidity, and soil moisture using Node MCU and several linked sensors. Additionally, a Wi-Fi connection is used to send a notification through SMS to the farmer's phone about the field's environmental state. The results will help in developing strategies and guidelines for the widespread adoption of IoT-enabled smart farming practices, ultimately leading to sustainable and efficient crop production to meet the demands of a growing population.</span></p>Neda Jadhav, Rajnivas B, Subapriya V, Sivaramakrishnan S, S Premalatha
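The abstract describes SMS alerts sent when field conditions degrade. A minimal sketch of the decision step might look as follows; the thresholds and field names are entirely hypothetical (the paper's NodeMCU firmware and SMS gateway are not given):

```python
# Illustrative alert logic for the SMS notification step; thresholds and
# field names are hypothetical assumptions, not taken from the paper.
THRESHOLDS = {
    "soil_moisture": (30.0, None),   # percent: alert below 30
    "temperature": (None, 40.0),     # Celsius: alert above 40
    "humidity": (20.0, 90.0),        # percent: alert outside 20-90
}

def check_alerts(reading):
    """Return messages for sensor values outside their (low, high) bounds."""
    alerts = []
    for key, (low, high) in THRESHOLDS.items():
        value = reading.get(key)
        if value is None:
            continue
        if low is not None and value < low:
            alerts.append(f"{key} low: {value}")
        if high is not None and value > high:
            alerts.append(f"{key} high: {value}")
    return alerts
```

On the real device, each returned message would be handed to whatever SMS-sending routine the firmware provides.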
Copyright (c) 2023 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2023-12-14 2023-12-14 10 10.4108/eetiot.4604
Personalized recognition system in online shopping by using deep learning
https://publications.eai.eu/index.php/IoT/article/view/4810
<p class="ICST-abstracttext"><span lang="EN-GB">This study presents an effective monitoring system that tracks the buying experience across multiple shop interactions by refining information derived from physiological data and facial expressions. The system's efficacy in recognizing consumers' emotions, and in avoiding bias based on age, race, and gender, was evaluated in a pilot study, and the system's output was compared with the results of conventional video analysis. The study's conclusions indicate that the suggested approach can aid the analysis of consumer experience in a store setting.</span></p>Manjula Devarakonda Venkata, Prashanth Donda, N. Bindu Madhavi, Pavitar Parkash Singh, A. Azhagu Jaisudhan Pazhani, Shaik Rehana Banu
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-10 2024-01-10 10 10.4108/eetiot.4810
Investigation of early symptoms of tomato leaf disorder by using analysing image and deep learning models
https://publications.eai.eu/index.php/IoT/article/view/4815
<p>Despite rapid population growth, agriculture must feed everyone, and to do so it must detect plant illnesses early; unfortunately, crop diseases are rarely predicted in time. This publication educates farmers about cutting-edge strategies for reducing plant leaf disease. Since tomato is a readily accessible vegetable, machine learning and image processing with an accurate algorithm are used to identify tomato leaf illnesses. This study examines disordered tomato leaf samples so that, based on early signs, farmers may quickly identify problem samples. Tomato leaf samples are resized to 256 × 256 pixels and then improved with histogram equalization. K-means clustering partitions the data space into Voronoi cells, contour tracing extracts the leaf sample boundaries, and the Discrete Wavelet Transform, Principal Component Analysis, and the Grey Level Co-occurrence Matrix extract features from the leaf samples.</p>Surendra Reddy Vinta, Ashok Kumar Koshariya, Sampath Kumar S, Aditya, Annantharao Gottimukkala
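Two of the preprocessing steps named above, histogram equalization followed by K-means clustering of pixel intensities, can be sketched in plain NumPy. This is an illustrative reimplementation, not the authors' code, and it omits the DWT/PCA/GLCM feature-extraction stages:

```python
import numpy as np

def hist_equalize(img):
    """Histogram-equalize an 8-bit grayscale image (uint8 numpy array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on pixel intensities, e.g. to separate leaf from background."""
    centers = np.linspace(values.min(), values.max(), k).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

A real pipeline would first resize the image to 256 × 256 and run the clustering on the equalized intensities.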
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-10 2024-01-10 10 10.4108/eetiot.4815
DPSO: A Hybrid Approach for Load Balancing using Dragonfly and PSO Algorithm in Cloud Computing Environment
https://publications.eai.eu/index.php/IoT/article/view/4826
<p class="ICST-abstracttext"><span lang="EN-GB" style="font-size: 10.0pt; font-family: 'Times New Roman',serif; font-weight: normal;">Load balancing is one of the pressing challenges in cloud computing systems. To address it, researchers have proposed many heuristic, metaheuristic, evolutionary, and hybrid algorithms, yet finding optimal solutions under dynamically changing task behaviour and computing environments remains an open research problem. This work develops a hybrid framework that balances the load in a cloud environment by obtaining the best fitness value. To achieve optimal resource allocation for load balancing, the proposed framework integrates the Dragonfly (DF) and Particle Swarm Optimization (PSO) algorithms. The performance of the proposed method is compared with the PSO and Dragonfly algorithms using measures such as best fitness value and response time, with user bases varied over 50, 100, 500, and 1000 and the population size also varied to observe the behaviour of the algorithm. The results show that the proposed method outperforms the other approaches for load balancing, and statistical analysis and standard testing validate its relative superiority over PSO and the Dragonfly algorithm. The hybrid approach provides better response time.</span></p>Subasish Mohapatra, Subhadarshini Mohanty, Hriteek Kumar Nayak, Millan Kumar Mallick, Janjhyam Venkata Naga Ramesh, Khasim Vali Dudekula
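The PSO half of the proposed DF+PSO hybrid can be sketched as below. The fitness function (variance of per-server utilisation for a vector of load shares) and all parameters are illustrative assumptions, not the paper's formulation, and the Dragonfly component is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x, capacity):
    """Imbalance of load shares x (normalized to sum to 1) across
    servers with the given capacities: variance of utilization."""
    shares = np.abs(x) / np.abs(x).sum()
    return (shares / capacity).var()

def pso(capacity, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain PSO minimizing the imbalance fitness above."""
    dim = len(capacity)
    pos = rng.random((n_particles, dim)) + 0.1
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, capacity) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, capacity) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return np.abs(gbest) / np.abs(gbest).sum(), pbest_f.min()
```

The optimum here assigns each server a share proportional to its capacity, driving the utilisation variance toward zero.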
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-11 2024-01-11 10 10.4108/eetiot.4826
Crime Prediction using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5123
<p>Crime analysis is the process of researching crime patterns and trends in order to find underlying issues and potential solutions for crime prevention. It includes using statistical analysis, geographic mapping, and other approaches to determine the type and scope of crime in an area, and can also entail building predictive models that use previous data to anticipate future crime trends. By evaluating crime data and finding trends, law enforcement authorities can allocate resources more efficiently and target initiatives to reduce crime and increase public safety. For prediction, this data was fed into algorithms such as Linear Regression and Random Forest. Using data from 2001 to 2016, crime-type projections are made for each state as well as all states in India, and these predictions are represented with simple visualisation charts. One critical feature of these algorithms is identifying the trend-changing year in order to boost the accuracy of the predictions. The main aim is to predict crime cases from 2017 to 2020 using the dataset from 2001 to 2016.</p>Sridharan S, Srish N, Vigneswaran S, Santhi P
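Identifying the trend-changing year, as the abstract describes, can be sketched as a two-segment linear fit that picks the split minimising total squared error. The split criterion here is an assumption for illustration, not necessarily the authors' method:

```python
import numpy as np

def trend_change_year(years, cases):
    """Pick the split year that minimizes the total squared error of
    two separate linear fits (before / after the split)."""
    best_year, best_sse = None, np.inf
    for i in range(2, len(years) - 2):
        sse = 0.0
        for ys, cs in ((years[:i], cases[:i]), (years[i:], cases[i:])):
            coef = np.polyfit(ys, cs, 1)
            sse += ((np.polyval(coef, ys) - cs) ** 2).sum()
        if sse < best_sse:
            best_year, best_sse = years[i], sse
    return best_year
```

Fitting the second segment alone and extrapolating it to 2017-2020 would then give the forward projection the abstract mentions.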
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-15 2024-02-15 10 10.4108/eetiot.5123
An Efficient Crop Yield Prediction System Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5333
<p class="ICST-abstracttext"><span lang="EN-GB">Farming is considered the biggest factor in strengthening the economy of any country and has significant effects on GDP growth. However, due to a lack of information and consultation, farmers suffer significant crop losses every year. Typically, farmers consult agricultural officers for detecting crop diseases, but predictions based on the officers' experience are not always reliable, and if the exact issues are not identified in time, heavy crop loss results. To address this issue, computational intelligence, also known as machine learning, can be applied to historical data. In this study, an intelligent crop yield prediction model is developed using various regression-based algorithms. The Crop Yield Prediction Dataset from the Kaggle repository is used for model training and evaluation. Among the different regression methods, Random Forest shows the best performance in terms of R2 score and other error measures.</span></p>Debabrata Swain, Sachin Lakum, Samrat Patel, Pramoda Patro, Jatin
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-07 2024-03-07 10 10.4108/eetiot.5333
Analyse and Predict the Detection of the Cyber - Attack Process by Using a Machine-Learning Approach
https://publications.eai.eu/index.php/IoT/article/view/5345
<p>Crimes committed online rank among the most critical global concerns, causing massive daily financial losses to countries and citizens. With the proliferation of cyber-attacks, cybercrime has also been on the rise. To effectively combat cybercrime, it is essential to identify its perpetrators and understand their methods, yet identifying and preventing cyber-attacks are difficult tasks. To address these concerns, recent research has produced safety models and forecasting tools grounded in artificial intelligence. Numerous methods for predicting criminal behaviour are available in the literature; while not perfect, they may help in predicting cybercrime and cyber-attack tactics. One way to approach this problem is to use real-world data to find out whether an attack happened and, if so, who was responsible. The data covers the crime, the perpetrator's demographics, the amount of property damaged, and the entry points of the attack; victims of cyber-attacks may also obtain information by submitting applications to forensics teams. This study uses machine learning methods to analyse cybercrime using two models and to forecast how the specified characteristics contribute to the detection of the cyber-attack methodology and perpetrator. Based on a comparison of eight distinct machine-learning methods, their accuracies were quite comparable. The linear Support Vector Machine (SVM) achieved the best accuracy in predicting cyber-attack tactics, and the first model gave a decent notion of the attacks victims would face. Logistic regression was the most successful technique for detecting malevolent actors. The second model compared the traits of perpetrators and victims to anticipate who they would be; a person's chances of being a victim of a cyber-attack decrease as their income and level of education rise. The proposed approach is expected to be used by departments dealing with cybercrime, making cyber-attack identification easier and the fight against cybercrime more efficient.</p>Charanjeet Singh, Ravinjit Singh, Shivaputra, Mohit Tiwari, Bramah Hazela
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-08 2024-03-08 10 10.4108/eetiot.5345
Enhancing Diabetes Prediction with Data Preprocessing and various Machine Learning Algorithms
https://publications.eai.eu/index.php/IoT/article/view/5348
<p class="ICST-abstracttext"><span lang="EN-GB">Diabetes mellitus, usually called diabetes, is a serious public health issue that is spreading like an epidemic around the world. It is a condition that results in elevated glucose levels in the blood. India is often referred to as the 'Diabetes Capital of the World' due to the country's 17% share of the global diabetes population: an estimated 77 million Indians over the age of 18 have diabetes (i.e., one in eleven), and there are an estimated 25 million more pre-diabetics. One way to control the growth of diabetes is to detect it at an early stage, which can lead to improved treatment. In this project, several machine learning algorithms, namely SVM, Decision Tree, Random Forest, KNN, Linear Regression, Logistic Regression, and Naive Bayes, are used to predict diabetes on the Pima Indians Diabetes Database. According to the experimental findings, Random Forest produced the highest accuracy among the algorithms used, at 91.10%.</span></p>Gudluri Saranya, Sagar Dhanraj Pande
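A minimal scikit-learn sketch of the Random Forest setup the abstract reports. A synthetic stand-in replaces the Pima Indians Diabetes Database here, so the accuracy will differ from the reported 91.10%:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in shaped like the 768-row, 8-feature Pima dataset.
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

Swapping in the real CSV (and repeating for SVM, KNN, and the other listed models) reproduces the kind of comparison the paper describes.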
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-08 2024-03-08 10 10.4108/eetiot.5348
Diabetic Retinopathy Eye Disease Detection Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5349
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Diabetic retinopathy is the name given to diabetes complications that harm the eyes. Its root cause is damage to the blood capillaries in the light-sensitive tissue at the rear of the eye. Over time, excessive blood sugar may block the tiny blood capillaries that nourish the retina, severing the retina's blood circulation; as a result, the eye tries to develop new blood vessels.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The objective of this research is to analyse and compare various algorithms based on their performance and efficiency in predicting diabetic retinopathy.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: An experimental model was developed to predict diabetic retinopathy at an early stage.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The results provide valuable insights into the effectiveness and scalability of the algorithms studied.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: The findings of this study contribute to the understanding of algorithm selection and its impact on overall model accuracy. By applying these algorithms, the disease can be predicted at an early stage so that it can be treated efficiently before it worsens.</span></p>Ruby Dahiya, Nidhi Agarwal, Sangeeta Singh, Deepanshu Verma, Shivam Gupta
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-08 2024-03-08 10 10.4108/eetiot.5349
Evaluation of Machine Learning Techniques for Enhancing Scholarship Schemes Using Artificial Emotional Intelligence
https://publications.eai.eu/index.php/IoT/article/view/5368
<p class="ICST-abstracttext"><span lang="EN-GB">This paper investigates sentiment analysis of the "scholarship system" [4] in Odisha, primarily to identify why some students do not apply for government-sponsored scholarships. Our research draws on social media platforms, surveys, and machine-learning-based analyses to understand the decision-making process and increase awareness of the various scholarship schemes. The goal of our experiment is to determine the efficacy of sentiment analysis in evaluating the effectiveness of different scholarship schemes, using a wide variety of techniques based on dictionaries and corpus lexicons. This evaluation could have a positive effect on the government by improving scholarship programs and giving financial aid to students from poor families, which would raise the level of education in Odisha. The paper concludes with a summary of successful and unsuccessful schemes, together with their word frequency counts and sentiment polarity scores.</span></p>P S Raju, Sanjay Kumar Patra, Binaya Kumar Patra
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-11 2024-03-11 10 10.4108/eetiot.5368
Predicting Academic Success: A Comparative Study of Machine Learning and Clustering-Based Subject Recommendation Models
https://publications.eai.eu/index.php/IoT/article/view/5378
<p>The study of students' academic performance is a significant endeavor for higher education schools and universities, since it is essential to the design and management of instructional strategies. The efficacy of the current educational system must be monitored by evaluating student achievement. For this research, we used multiple machine learning algorithms and neural networks to analyze learning quality. This study investigates the real university examination results of B.Tech (Bachelor in Technology) students on a four-year undergraduate programme in Computer Science and Technology. The K-means clustering approach is used to recommend courses, highlighting those that would challenge students and those that will improve their GPA. The linear regression method is used to predict a student's rank among their batchmates. Academic planners might base operational choices and future planning on the findings of this study.</p>Kinjal, Sagar Mousam Parida, Jayesh Suthar, Sagar Dhanraj Pande
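The two models named above (K-means over subject-wise marks for course recommendation, linear regression from GPA to batch rank) can be sketched on hypothetical data; the record layout and student counts below are assumptions, not the study's dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical records: average marks of 60 students in 5 subjects.
marks = rng.uniform(40, 95, size=(60, 5))
gpa = marks.mean(axis=1) / 10

# Cluster students by subject-wise performance; the lowest-scoring subject
# in each cluster centre flags a course likely to challenge that group.
km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(marks)
weak_subject = km.cluster_centers_.argmin(axis=1)

# Rank among batchmates (1 = best GPA), predicted from GPA by regression.
rank = (-gpa).argsort().argsort() + 1
reg = LinearRegression().fit(gpa.reshape(-1, 1), rank)
```

Since rank improves (gets smaller) as GPA rises, the fitted slope is negative, which is the relationship the rank predictor exploits.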
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 2024-03-12 10 10.4108/eetiot.5378
Machine Learning Based Stroke Predictor Application
https://publications.eai.eu/index.php/IoT/article/view/5384
<p>When blood flow to the brain stops or slows down, brain cells die because they do not get enough oxygen and nutrients; this condition is known as an ischemic stroke, now among the leading causes of death worldwide. Examination of afflicted people has revealed a number of risk variables thought to be connected to the stroke's origin, and numerous studies have used these risk variables to predict stroke-related illness. Prompt identification of the various warning symptoms of stroke can mitigate its severity, and machine learning techniques yield prompt and precise predictive outcomes. Although its uses in healthcare are expanding, certain research domains still need more study; we believe machine learning algorithms may aid a deeper comprehension of illnesses and make an excellent healthcare partner. For this study, a textual dataset of numerous patients with many medical variables was gathered. During processing, the missing values in the dataset are located and dealt with, and the dataset is used to train machine learning algorithms including Random Forest, Decision Tree, and SVM classifiers. Once the accuracy of the algorithms has been determined, the method that delivers the greatest accuracy for our dataset is selected. This helps patients determine the likelihood of a brain stroke and ensures they get the right medical attention.</p>R Kishore Kanna, Ch. Venkata Rami Reddy, Bhawani Sankar Panigrahi, Naliniprava Behera, Sarita Mohanty
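The pipeline described (handle missing values, train Random Forest, Decision Tree, and SVM, keep the most accurate) can be sketched with scikit-learn. The patient table is replaced by a synthetic stand-in, since the study's dataset is not public here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the patient table, with missing values injected.
X, y = make_classification(n_samples=500, n_features=10, random_state=7)
X[np.random.default_rng(7).random(X.shape) < 0.05] = np.nan

X = SimpleImputer(strategy="mean").fit_transform(X)   # handle missing values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

models = {"random_forest": RandomForestClassifier(random_state=7),
          "decision_tree": DecisionTreeClassifier(random_state=7),
          "svm": SVC()}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
best = max(scores, key=scores.get)   # model with the greatest accuracy
```

Mean imputation is one simple choice; the paper does not specify which missing-value strategy was used.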
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 2024-03-12 10 10.4108/eetiot.5384
Dynamic Load Balancing in Cloud Computing: A Review and a Novel Approach
https://publications.eai.eu/index.php/IoT/article/view/5387
<p class="ICST-abstracttext"><span lang="EN-GB">In cloud computing, load balancing is essential since it guarantees effective resource utilisation and reduces response times by distributing the incoming workload across the available servers. Load balancing divides tasks among multiple systems or resources over the internet, ensuring that no server or computer is overloaded, underutilised, or idle in a cloud computing environment. It optimises a number of variables, including execution time, response time, and system stability, and evenly distributes traffic, workloads, and computing resources across the environment to increase the effectiveness and reliability of cloud services. The proposed method, Enhanced Dynamic Load Balancing for Cloud Computing, adjusts load distribution and maintains a balanced system by considering variables such as server capacity, workload distribution, and system load. By incorporating these factors and employing adaptive threshold adjustment, the approach optimises resource allocation and enhances system performance. Experimental research shows that the proposed approach is more effective and efficient than current load balancing techniques and offers a workable alternative for dynamic load balancing in cloud computing.</span></p>Jasobanta Laha, Sabyasachi Pattnaik, Kumar Surjeet Chaudhury
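A toy version of dynamic load balancing with adaptive threshold adjustment, as described above. The placement rule (least relative load) and the threshold step are illustrative assumptions, not the paper's algorithm:

```python
def assign_task(loads, capacities, task_cost, threshold=0.8, step=0.05):
    """Place a task on the server with the lowest relative load among those
    that stay under `threshold` utilization; if none qualifies, raise the
    threshold (a stand-in for adaptive threshold adjustment)."""
    n = len(loads)
    while True:
        fits = [i for i in range(n)
                if (loads[i] + task_cost) / capacities[i] <= threshold]
        if fits:
            best = min(fits, key=lambda i: loads[i] / capacities[i])
            break
        if threshold >= 1.0:
            # last resort: accept overload on the least-loaded server
            best = min(range(n), key=lambda i: loads[i] / capacities[i])
            break
        threshold = min(1.0, threshold + step)
    loads[best] += task_cost
    return best, threshold
```

Calling this per incoming task keeps utilisation below the threshold while it can, and relaxes the threshold only when every server would otherwise exceed it.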
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 2024-03-12 10 10.4108/eetiot.5387
Cloud Based Document Understanding System
https://publications.eai.eu/index.php/IoT/article/view/5390
<p>In recent years, the popularity of cloud-based systems has been on the rise, particularly in the field of document management. One of the main challenges in this area is the need for effective document understanding, which involves the extraction of meaningful information from unstructured data. To address this challenge, we propose a cloud-based document understanding system that leverages state-of-the-art machine learning techniques and natural language processing algorithms.</p><p>This system utilizes a combination of optical character recognition (OCR), text extraction, and machine learning models to extract and classify relevant information from documents. The system is designed to be scalable and flexible, allowing it to handle large volumes of data and adapt to different document types and formats. Additionally, our system employs advanced security measures to ensure the confidentiality and integrity of the processed data.</p><p>This cloud-based document understanding system has the potential to significantly improve document management processes in various industries, including healthcare, legal, and finance.</p>Parth Rewoo, Aditya Kumar Jaiswal, Durvesh Mahajan, Harshit Naidu
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-12 2024-03-12 10 10.4108/eetiot.5390
IoT and AI for Silambam Martial Arts: A Review
https://publications.eai.eu/index.php/IoT/article/view/4771
<p class="ICST-abstracttext"><span lang="EN-GB">Silambam is a traditional martial art from Tamil Nadu, India. It is a stick-fighting art that has been practiced for centuries. In recent years, there has been growing interest in using technology to enhance the practice and promotion of Silambam. One way to do this is to use the Internet of Things (IoT). IoT devices can be used to collect data on Silambam practitioners' movements and performance. This data can then be analysed using artificial intelligence (AI) to provide insights and recommendations to practitioners. For example, IoT sensors could be attached to Silambam sticks to track the number of rotations, speed, and force of each strike. This data could then be used to provide feedback to practitioners on their technique and progress. AI could also be used to develop personalized training programs for Silambam practitioners. For example, AI could analyse a practitioner's data to identify areas where they need improvement and then recommend specific exercises or drills. In addition to enhancing the practice of Silambam, IoT and AI could also be used to promote the art to a wider audience. For example, IoT-enabled Silambam sticks could be used to create interactive training games or simulations. Overall, IoT and AI have the potential to revolutionize the way that Silambam is practiced and promoted. By using these technologies, we can make Silambam more accessible, engaging, and effective for practitioners of all levels.</span></p>Vijayarajan Ramanathan, Meyyappan T, Gnanasankaran Natarajan
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-01-04 2024-01-04 10 10.4108/eetiot.4771
A Survey about Post Quantum Cryptography Methods
https://publications.eai.eu/index.php/IoT/article/view/5099
<p>Cryptography is the art of hiding significant data or information with other codes; it is the practice and study of securing information and communication, preventing third-party intervention in data communication. Cryptographic technology transforms data into another form to enhance security and robustness against attacks. The drive to enhance the security of data transfer has only grown since the rise of Artificial Intelligence, and modern cryptographic algorithms such as AES, 3DES, RSA, Diffie-Hellman, and ECC came into practice. The public-key encryption techniques now in use are based on the hardness of discrete logarithms over elliptic curves and of integer factorization; however, both of those hard problems can be solved effectively with a sufficiently large-scale quantum computer. Post-Quantum Cryptography (PQC) aims to deal with an attacker who has such a computer, so it is essential to build cryptographic algorithms that remain robust and secure where vulnerable pre-quantum methods fail. Present post-quantum cryptosystems tend to have very large encryption keys and signature sizes, so careful assessment of encryption/decryption time and of the amount of traffic over the communication wire is required. This article discusses different families of post-quantum cryptosystems, analyses the current status of the National Institute of Standards and Technology (NIST) post-quantum cryptography standardisation process, and looks at the difficulties faced by the PQC community.</p>Jency Rubia J, Babitha Lincy R, Ezhil E Nithila, Sherin Shibi C, Rosi A
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-122024-02-121010.4108/eetiot.5099A Literature Review for Detection and Projection of Cardiovascular Disease Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5326
<p class="ICST-abstracttext"><span lang="EN-GB">The heart is a vital organ, indispensable to the general health and welfare of individuals. Cardiovascular diseases (CVD) are a major health concern worldwide and the leading cause of death, ahead of diabetes and cancer. Early detection and prediction of CVDs is therefore essential and can significantly reduce morbidity and mortality rates. Computer-aided techniques assist physicians in diagnosing many heart disorders, such as valve dysfunction and heart failure. Living in an "information age," millions of bytes of data are generated every day, and data mining techniques can turn these data into knowledge for clinical investigation. Machine learning algorithms have shown promising results in predicting heart disease from different risk parameters. In this study, we appraise and compare the outputs generated by machine learning algorithms for predicting CVDs, including support vector machines, artificial neural networks, logistic regression, random forests, and decision trees. This literature survey highlights the accuracy of different machine learning algorithms in forecasting heart disease and can serve as the basis for a clinical decision-making aid to detect and prevent heart disease at an early stage.</span></p>Sumati BaralSuneeta SatpathyDakshya Prasad PatiPratiti MishraLalmohan Pattnaik
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-072024-03-071010.4108/eetiot.5326A Review of Machine Learning-based Intrusion Detection System
https://publications.eai.eu/index.php/IoT/article/view/5332
<p class="ICST-abstracttext"><span lang="EN-GB">Intrusion detection systems are now widely deployed. They act as countermeasures that identify web-based security threats: a device or software program monitors a network for unauthorized activity and sends alerts to administrators, scanning for known threat signatures and for anomalies in normal behaviour. This article analyses different types of intrusion detection systems and their modes of operation, focusing on support vector machines, machine learning, fuzzy logic, and supervised learning. We compared the different strategies on the KDD dataset according to their accuracy, and found that combining support vector machines with other machine learning techniques improves accuracy. </span></p>Nilamadhab MishraSarojananda Mishra
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-072024-03-071010.4108/eetiot.5332Enhanced Security in Public Key Cryptography: A Novel Approach Combining Gaussian Graceful Labeling and NTRU Public Key Cryptosystem
https://publications.eai.eu/index.php/IoT/article/view/4992
<p>This research explores an encryption system that combines the N<sup>th</sup>-degree Truncated Polynomial Ring Unit (NTRU) public key cryptosystem with Gaussian Graceful Labeling. This labeling process assigns distinct labels to a graph's vertices, yielding successive Gaussian integers. The NTRU method offers enhanced security and efficient key exchange. The communication encryption process depends on integers P, a, and b, with P being the largest prime number in the vertex labeling. The original receiver is the vertex labeled with the largest prime-number coefficient, while all other receivers receive messages from the sender. NTRU encryption and decryption use a polynomial algebraic mixing system and a clustering principle based on the abecedarian probability proposition. The choice of relatively prime integers p and q in NTRU shapes the construction of the polynomial rings used for encryption and decryption, with specific choices and properties designed to ensure the security of the scheme.</p>S KavithaG JayalalithaK Sivaranjani
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-02-012024-02-011010.4108/eetiot.4992A Feature Extraction of Photovoltaic Solar Panel monitoring system based on Internet of Things (IoT)
https://publications.eai.eu/index.php/IoT/article/view/5292
<p>INTRODUCTION: The Internet of Things (IoT) is a modern technology that improves user experience and gives objects more intelligence. A large number of applications have already embraced the IoT, and its development has made our lives significantly easier and more accessible. In this research, a photovoltaic solar panel system is monitored using the IoT.</p><p>OBJECTIVES: This work presents the feature extraction of an IoT-based photovoltaic solar panel monitoring system. The implementation of a maximum power point tracking (MPPT) algorithm is also covered, along with a brief description of the pre-processing method, the datasets, and the PV system features extracted.</p><p>METHODS: To increase voltage and current efficiency, a maximum power point tracking (MPPT) technique is implemented in this study.</p><p>RESULTS: The monitoring system presents its readings on an LCD screen, including the IP address, voltage and current ratings, light intensity, and temperature; when a fault occurs on the system, a warning message is sent.</p><p>CONCLUSION: The proposed solar panel monitoring system demonstrates higher voltage and current accuracy than the existing method.</p>J SaranyaV Divya
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
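The abstract above mentions an MPPT algorithm without detailing it; a common choice is perturb-and-observe. The following is a minimal illustrative sketch of that hill-climbing loop against a toy PV power curve (the curve shape, step size, and starting voltage are assumptions, not the authors' implementation):

```python
# Illustrative perturb-and-observe MPPT sketch (not the paper's implementation).

def pv_power(voltage):
    """Toy PV panel power curve with its maximum at 17.5 V."""
    return max(0.0, 100.0 - (voltage - 17.5) ** 2)

def perturb_and_observe(v_start=12.0, step=0.25, iterations=100):
    """Keep perturbing the operating voltage in the direction that increased
    power; reverse direction as soon as power drops (we overshot the peak)."""
    v = v_start
    direction = 1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 2))  # settles within one step of the 17.5 V maximum
```

Once at the peak, the operating point oscillates within one step of it, which is the characteristic trade-off of perturb-and-observe: a smaller step tracks more precisely but converges more slowly.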
2024-03-042024-03-041010.4108/eetiot.5292Effective Facial Expression Recognition System Using Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5362
<p class="ICST-abstracttext"><span lang="EN-GB">Facial expression recognition (FER) has seen a great deal of study in computer vision and machine learning, and in recent years deep learning techniques have shown remarkable progress on FER tasks. We propose a novel FER method that combines k-nearest neighbours (KNN) and long short-term memory (LSTM) algorithms for more efficient and accurate facial expression recognition. The proposed system has two primary steps: feature extraction and classification. For feature extraction, we use the Local Binary Patterns (LBP) algorithm, a simple yet powerful technique that captures texture information from facial images. In the classification stage, we use the KNN and LSTM algorithms. KNN is a simple and effective classifier that finds the k training samples closest to a test sample and assigns it to the class most frequent among those neighbours; however, KNN has limitations in handling temporal information. To address this limitation, we use LSTM, a subclass of recurrent neural networks capable of capturing temporal relationships in time-series data. The LSTM network takes the LBP features of a sequence of facial images as input and processes them through a series of LSTM cells to estimate the final encoding of the expression. We evaluate the proposed system on two publicly available datasets: CK+ and Oulu-CASIA. The experimental findings show that the proposed system achieves state-of-the-art performance on both datasets, outperforming other methods, including deep learning systems, in terms of F1-score and precision. In conclusion, the proposed FER system combining the KNN and LSTM algorithms achieves high accuracy and a high F1 score in recognising facial expressions from image sequences. It can be used in many contexts, including human-computer interaction, emotion detection, and behaviour analysis.</span></p>Dheeraj HebriRamesh NuthakkiAshok Kumar DigalK G S VenkatesanSonam ChawlaC Raghavendra Reddy
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
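The LBP-then-KNN pipeline in the abstract above can be sketched in a few lines. This toy version works on tiny grayscale grids and omits the LSTM stage entirely; the images, labels, and k value are invented for illustration:

```python
# Toy sketch of an LBP-feature + KNN-classification pipeline.
# Real FER would operate on face crops, not 5x5 grids.

def lbp_histogram(img):
    """8-neighbour Local Binary Patterns over interior pixels, as a 256-bin histogram."""
    hist = [0] * 256
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= center:
                    code |= 1 << bit      # one bit per neighbour comparison
            hist[code] += 1
    return hist

def knn_predict(train, test_feat, k=1):
    """Assign the majority label among the k nearest training histograms."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, test_feat)), label)
        for feat, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

smooth = [[10] * 5 for _ in range(5)]                                 # uniform texture
noisy = [[(r * 37 + c * 91) % 255 for c in range(5)] for r in range(5)]  # varied texture
train = [(lbp_histogram(smooth), "neutral"), (lbp_histogram(noisy), "surprise")]
print(knn_predict(train, lbp_histogram(noisy)))  # → surprise
```

The LBP histogram discards pixel positions and keeps only texture statistics, which is what makes such a small feature vector comparable across images of different faces.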
2024-03-112024-03-111010.4108/eetiot.5362Machine Learning Classifiers for Credit Risk Analysis
https://publications.eai.eu/index.php/IoT/article/view/5376
<p class="ICST-abstracttext" style="margin-left: 0in;"><span lang="EN-GB" style="font-size: 9.0pt;">The modern world is a place of global commerce. Since globalization became widespread, entrepreneurs from small and medium enterprises to large ones have looked to banks, which have existed in various forms since antiquity, as their pillars of support. As a consequence, the risk of granting loans in their various forms has significantly increased, and businesses face financing difficulties. Credit risk analysis, performed over many types of data, is a major aspect of approving a loan application; the goal is to minimize the risk of approving a loan for individuals or businesses who might not pay it back on time. This research paper addresses the challenge by applying various machine learning classifiers to the German credit risk dataset, evaluating and comparing their accuracy to identify the most effective classifier for credit risk analysis. Furthermore, it proposes a contributory approach that combines the strengths of multiple classifiers to enhance the decision-making process for loan approvals. By leveraging ensemble learning techniques, such as the Voting Ensemble model, the aim is to improve the accuracy and reliability of credit risk analysis. Additionally, it explores tailored feature engineering techniques that select and engineer informative features specific to credit risk analysis.</span></p>SudikshaPreethi NanjundanJossy P George
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
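The voting-ensemble idea in the abstract above reduces to a simple mechanism: each base classifier casts a vote and the majority decides. The sketch below uses three invented rule-based stand-ins in place of the trained models the paper compares; all field names and thresholds are illustrative assumptions:

```python
# Minimal hard-voting ensemble sketch; rules and thresholds are invented.

def classifier_income(applicant):       # approve if income comfortably covers the loan
    return 1 if applicant["income"] >= 2 * applicant["loan"] else 0

def classifier_history(applicant):      # approve only on a clean credit history
    return 1 if applicant["defaults"] == 0 else 0

def classifier_tenure(applicant):       # approve long-standing employment
    return 1 if applicant["years_employed"] >= 3 else 0

def voting_ensemble(applicant, classifiers):
    """Hard voting: each model casts one vote; the majority class wins."""
    votes = sum(clf(applicant) for clf in classifiers)
    return 1 if votes > len(classifiers) / 2 else 0

models = [classifier_income, classifier_history, classifier_tenure]
good = {"income": 5000, "loan": 2000, "defaults": 0, "years_employed": 5}
risky = {"income": 1000, "loan": 2000, "defaults": 2, "years_employed": 1}
print(voting_ensemble(good, models), voting_ensemble(risky, models))  # → 1 0
```

In practice the base models would be trained classifiers (as in scikit-learn's `VotingClassifier`), but the aggregation rule is exactly this: disagreement among models is resolved by majority, which is what makes the ensemble more reliable than any single member.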
2024-03-122024-03-121010.4108/eetiot.5376Proper Weather Forecasting Internet of Things Sensor Framework with Machine Learning
https://publications.eai.eu/index.php/IoT/article/view/5382
<p class="ICST-abstracttext"><span lang="EN-GB">Recent times have seen a rise in the attention paid to big data and Internet of Things (IoT) configurations. The primary focus of researchers has been the development of big data analytics solutions based on machine learning, which is becoming more prevalent in this sector because of its ability to unearth hidden traits and patterns even within exceedingly complicated datasets. In this study, we applied our big data and IoT-based system to a use case involving the processing of weather information. We put climate clustering and sensor identification algorithms into practice using publicly available data, and report execution details for every level of the architecture. The training method chosen for the package is k-means clustering based on Scikit-Learn. The analyses indicate that our strategy can usefully retrieve information from a rather complicated database.</span></p>Anil V TurukmaneSagar Dhanraj Pande
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
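The climate-clustering step above relies on k-means, which the paper runs through Scikit-Learn. The dependency-free sketch below shows the underlying Lloyd's algorithm on a one-dimensional toy series; the temperature readings and two-cluster setup are assumptions for illustration:

```python
# Pure-Python k-means sketch (the paper itself uses Scikit-Learn's KMeans).

def kmeans_1d(points, k=2, iterations=20):
    """Lloyd's algorithm on scalar readings: assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    centroids = [points[0], points[-1]]            # crude initialisation
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious temperature regimes: a cold cluster and a warm cluster.
readings = [2.1, 3.0, 1.5, 2.8, 24.0, 25.5, 23.2, 26.1]
print(kmeans_1d(readings))  # two centroids, one near each regime
```

The same assign-then-update loop generalises to vectors of weather features by replacing the absolute difference with Euclidean distance, which is essentially what Scikit-Learn's `KMeans.fit` does at scale.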
2024-03-122024-03-121010.4108/eetiot.5382Early Detection of Cardiovascular Disease with Different Machine Learning Approaches
https://publications.eai.eu/index.php/IoT/article/view/5389
<p class="ICST-abstracttext"><span lang="EN-GB">With mortality rates rising around the world in recent years, cardiovascular diseases (CVD) have swiftly become a leading cause of morbidity, creating a need for early diagnosis to ensure effective treatment. With machine learning emerging as a promising diagnostic tool, this study proposes and compares various algorithms for detecting CVD using several evaluation metrics, including accuracy, precision, F1 score, and recall. ML has the potential to improve CVD prediction, detection, and treatment by analysing patient information and identifying patterns that may be difficult for humans to interpret and detect. Several state-of-the-art ML and DL models, such as Decision Tree, XGBoost, KNN, and ANN, were employed. The results of these models reflect the potential of machine learning for CVD detection and highlight the need to integrate such models into clinical practice, along with developing more robust and accurate models to improve the predictions. Such integration would significantly help reduce the burden of CVD on healthcare systems.</span></p>Eyashita SinghVartika SinghAryan RaiIvan ChristopherRaj MishraK S Arikumar
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-122024-03-121010.4108/eetiot.5389Color-Driven Object Recognition: A Novel Approach Combining Color Detection and Machine Learning Techniques
https://publications.eai.eu/index.php/IoT/article/view/5495
<p class="ICST-abstracttext"><span lang="EN-GB">INTRODUCTION: Object recognition is a crucial task in computer vision, with applications in robotics, autonomous vehicles, and security systems. </span></p><p class="ICST-abstracttext"><span lang="EN-GB">OBJECTIVES: The objective of this paper is to propose a novel approach for object recognition by combining color detection and machine learning techniques.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">METHODS: The research employs YOLO v3, a state-of-the-art object detection algorithm, and k-means optimized clustering to enhance the accuracy and efficiency of object recognition.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">RESULTS: The main results obtained in this paper showcase the outperformance of the authors’ approach on a standard object recognition dataset compared to state-of-the-art approaches using only color features. Additionally, the effectiveness of this approach is demonstrated in a real-world scenario of detecting and tracking objects in a video stream.</span></p><p class="ICST-abstracttext"><span lang="EN-GB">CONCLUSION: In conclusion, this approach, integrating color and shape features, has the potential to significantly enhance the accuracy and robustness of object recognition systems. This contribution can pave the way for the development of more reliable and efficient object recognition systems across various applications.</span></p>Aadarsh NayyerAbhinav KumarAayush RajputShruti PatilPooja KamatShivali WagleTanupriya Choudhury
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-212024-03-211010.4108/eetiot.5495Analysis of Student Study Pattern for Personalized Learning using an Innovative Approach
https://publications.eai.eu/index.php/IoT/article/view/6988
<p>In an era of rapid technological advancement, Artificial Intelligence and Machine Learning (AIML) are revolutionizing the way we learn and interact with technology. However, this influx of information can be overwhelming for students, making it challenging to absorb and retain knowledge within a short timeframe. Learning preferences vary greatly from individual to individual: some students prefer video tutorials, others favour hands-on practical experiences, and still others rely on traditional textbooks. To address this diverse range of learning styles, there is a need for an interactive application that provides regular assessments after each lesson, regardless of the chosen learning method. Such an application would analyse each student's performance to identify their most effective learning approach. This personalized approach is particularly valuable in large coaching institutes, where a limited number of instructors cannot effectively monitor the progress of thousands of students simultaneously. By incorporating additional learning materials and implementing specific adjustments, the application can significantly enhance the learning experience of students and adult learners alike, empowering them to navigate the complexities of technology with greater confidence and ease.</p>Aaryan RaoR.H GoudarDhananjaya G MVijayalaxmi RathodAnjanabhargavi Kulkarni
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-212024-08-211010.4108/eetiot.6988Personalized Book Recommendations: A Hybrid Approach Leveraging Collaborative Filtering, Association Rule Mining, and Content-Based Filtering
https://publications.eai.eu/index.php/IoT/article/view/6996
<p class="ICST-abstracttext"><span lang="EN-GB">Recommender systems have been in use for well over a decade. Many people perpetually grapple with selecting what to read next; even students can find it difficult to choose a textbook or reference book on a subject they do not yet know. Nowadays, people walk into a library or browse the internet without a specific book in mind, yet each reader differs in tastes and interests. In today's information-rich world, recommendation systems are essential tools that simplify the lives of consumers, and for book lovers a Book Recommendation System (BRS) is the ideal fix. Online bookstores compete for attention, but current systems retrieve unnecessary data and yield low user satisfaction, so the authors crafted a BRS merging collaborative filtering (CF), association rule mining (ARM), and content-based filtering. The BRS delivers recommendations that are both efficient and effective. The primary intention of this paper is to encourage a love of reading and help people form lifelong habits: the BRS selects an ideal book based on a reader's preferences and data from various sources, inspiring individuals to read more and to discover new authors and genres. Leveraging datasets and machine learning algorithms, the collaborative filtering and content-based filtering techniques help people find the perfect book, one that fascinates and incites a desire to explore additional literary treasures.</span></p>Akash BhajantriNagesh KR. H. GoudarDhananjaya G MRohit.B. KaliwalVijayalaxmi RathodAnjanabhargavi KulkarniGovindaraja K
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
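Of the three techniques the abstract above merges, the collaborative-filtering component is the easiest to sketch: find the most similar user and recommend their highest-rated unread book. The ratings, user names, and titles below are invented, and the nearest-neighbour strategy is only one of many CF variants:

```python
# Tiny user-based collaborative-filtering sketch; all data is illustrative.
import math

ratings = {
    "alice": {"Dune": 5, "Emma": 1, "Ulysses": 4},
    "bob":   {"Dune": 4, "Emma": 2, "It": 5},
    "carol": {"Emma": 5, "It": 1, "Ulysses": 2},
}

def cosine(u, v):
    """Cosine similarity over the books both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[b] * v[b] for b in common)
    nu = math.sqrt(sum(u[b] ** 2 for b in common))
    nv = math.sqrt(sum(v[b] ** 2 for b in common))
    return dot / (nu * nv)

def recommend(user):
    """Pick the most similar other user, then their best book the user hasn't read."""
    others = [(cosine(ratings[user], ratings[o]), o) for o in ratings if o != user]
    _, neighbour = max(others)
    unread = {b: r for b, r in ratings[neighbour].items() if b not in ratings[user]}
    return max(unread, key=unread.get)

print(recommend("alice"))  # → It  (bob rates most like alice and loved "It")
```

Content-based filtering would instead compare book features (genre, author) to the user's history, and association rule mining would surface "readers of X also read Y" patterns; a hybrid BRS blends all three signals.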
2024-08-212024-08-211010.4108/eetiot.6996Blockchain Technology for Manufacturing Sector
https://publications.eai.eu/index.php/IoT/article/view/7034
<p class="ICST-abstracttext"><span lang="EN-GB">With technology advancing rapidly, organizations must continuously develop to remain competitive, investing in technologies such as blockchain, artificial intelligence, machine learning, and cloud computing. This study focuses on the challenges of implementing blockchain technology in the manufacturing sector. Data was collected through structured interviews with production and design managers, as well as employees of organizations using new technologies. The snowball sampling method was employed, and analysis was conducted using the large group decision method. The findings have significant implications for leveraging blockchain in manufacturing. The study explores factors related to opportunities and challenges within the technology organisation's environment, addressing existing research gaps. The findings are constrained by the scope of the data series, which presents longitudinal facts. To tackle the prospects and complications highlighted in the study, organizations should make use of this technology to enhance their manufacturing processes.</span></p>Lakshminarayana KPraveen KulkarniPadma S Dandannavar Basavaraj S. TigadiPrayag GokhaleShreekant Naik
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-232024-08-231010.4108/eetiot.7034Priority Controlled Data Transmission in Sensor Networks
https://publications.eai.eu/index.php/IoT/article/view/7038
<p class="ICST-abstracttext"><span lang="EN-GB">In any real-time communication, the chosen protocol plays a major role in determining the QoS provided. This paper illustrates the working of a protocol that supports real-time traffic through the discovery of multiple paths. The algorithm handles dynamic changes by computing alternate routes, and the priority of data determines its share of bandwidth. Performance is assessed with different node distributions and scenarios. The protocol resolves congestion, contention, and the limitations of static route switching. Its other advantages include, but are not limited to, better delivery times, lower bandwidth requirements, and higher throughput.</span></p>Mary Cherian
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-232024-08-231010.4108/eetiot.7038LSTM-BIGRU based Spectrum Sensing for Cognitive Radio
https://publications.eai.eu/index.php/IoT/article/view/7041
<p class="ICST-abstracttext"><span lang="EN-GB">There is a shortage of wireless spectrum owing to developments in wireless communications and the growing number of users sharing resources. Spectrum sensing is a method that addresses this shortage. Deep learning surpasses classical methods in spectrum sensing by enabling autonomous feature learning, which allows the adaptive identification of complicated patterns in radio-frequency data for cognitive radio in wireless sensor networks. This innovation increases the system's capacity to manage dynamic, real-time circumstances, resulting in higher accuracy than traditional approaches. This article proposes spectrum sensing (SS) using LSTM-BiGRU with Gaussian noise. Long-term dependencies in sequential data are well preserved by the LSTM thanks to its dedicated memory cells, and the integration of the BiGRU further enhances the model's ability to address and manage those long-term dependencies. The investigation used the open-source RadioML2016.04C.multisnr dataset, while the open-source RadioML2016.10b dataset was used to evaluate performance on QAM64, QPSK, and QAM16. The experimental findings demonstrate that the suggested spectrum sensing achieves better accuracy on the dataset, particularly at lower SNRs. The evaluation of performance indicators such as the F1 score, CKC, and Matthews correlation coefficient shows the improved SS performance of the proposed model, highlighting its potency in spectrum sensing applications.</span></p>E. Vargil VijayK. Aparna
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-08-232024-08-231010.4108/eetiot.7041Detection of Anomalous Bitcoin Transactions in Blockchain Using ML
https://publications.eai.eu/index.php/IoT/article/view/7042
<p class="ICST-abstracttext"><span lang="EN-GB">An Internet of Things (IoT)-enabled blockchain helps to ensure quick, efficient, and immutable transactions. The integration of low-power IoT with the Bitcoin network has created new opportunities and difficulties for blockchain transactions. Using data gathered from IoT-enabled devices, this study investigates the application of ML regression models to analyse and forecast Bitcoin transaction patterns. Several ML regression algorithms, including Lasso Regression, Gradient Boosting, Extreme Boosting, Extra Tree, and Random Forest Regression, are employed to build predictive models. These models are trained on historical Bitcoin transaction data to capture intricate relationships between the various transaction parameters. To ensure model robustness and generalisation, cross-validation techniques and hyperparameter tuning are also applied. The empirical results show the prediction of the cost of blockchain Bitcoin transactions as a time series. The study also highlights the possibility of fusing blockchain analytics with IoT data streams, illuminating how new technologies might work together to enhance financial institutions.</span></p>Soumya Bajpai Kapil Sharma Brijesh Kumar Chaurasia
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
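The regression-on-a-time-series idea above can be illustrated with the simplest possible regressor. The sketch below fits closed-form least squares, far simpler than the tree ensembles the paper uses, on an invented fee-per-block sequence; the data and the one-step forecast are purely illustrative:

```python
# Simple linear regression on a toy "transaction cost over time" series.

def least_squares(xs, ys):
    """Closed-form slope and intercept for simple linear regression."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

blocks = [0, 1, 2, 3, 4]             # time index
fees = [1.0, 1.5, 2.0, 2.5, 3.0]     # invented fee series (linear by design)
slope, intercept = least_squares(blocks, fees)
forecast = slope * 5 + intercept     # next-step forecast
print(slope, intercept, forecast)    # → 0.5 1.0 3.5
```

Gradient-boosted or random-forest regressors replace this single line with an ensemble of piecewise-constant trees, which is what lets them capture the non-linear relationships between transaction parameters that the abstract mentions.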
2024-08-232024-08-231010.4108/eetiot.7042RETRACTED ARTICLE: Monitoring of operational conditions of fuel cells by using machine learning
https://publications.eai.eu/index.php/IoT/article/view/5377
<p>This article has been retracted at the request of our research integrity team. You can find the retraction notice at the following link https://doi.org/10.4108/eetiot.7176</p>Andip Babanrao ShroteK Kiran KumarChamandeep KaurMohammed Saleh Al AnsariPallavi SinghBramah HazelaMadhu G C
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-03-122024-03-121010.4108/eetiot.5377Retraction Notice: Monitoring of operational conditions of fuel cells by using machine learning.
https://publications.eai.eu/index.php/IoT/article/view/7176
<p>The article <em>Monitoring of operational conditions of fuel cells by using machine learning</em> has been retracted at the request of EAI's Research Integrity Committee on the grounds of plagiarism.</p>Andip Babanrao ShroteK Kiran KumarChamandeep KaurMohammed Saleh Al AnsariPallavi SinghBramah HazelaMadhu G C
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
2024-09-042024-09-041010.4108/eetiot.7176A Comprehensive analysis of services towards Data Aggregation, Data Fusion and enhancing security in IoT-based smart home
https://publications.eai.eu/index.php/IoT/article/view/6703
<p>Data aggregation and sensor data fusion can be very helpful in a number of developing fields, including deep learning, driverless cars, smart cities, and the Internet of Things (IoT). An advanced smart home application tests the upgraded Constrained Application Protocol (CoAP) using Contiki Cooja; a smart home can enhance people's comfort. Secure authentication between the transmitter and recipient nodes is essential for providing IoT services, and in many IoT applications device data are critical. Current encryption techniques use complicated arithmetic for security, but such arithmetic wastes power; hash algorithms can instead authenticate these IoT applications. Mobile protection issues must be treated seriously, because smart systems are automatically regulated. CoAP lets sensors send and receive server data with an energy-efficient hash function to increase security and speed. SHA-224, SHA-1, and SHA-256 were tested over the CoAP protocol, and the proposed model showed that SHA-224 starts secure sessions faster than SHA-256 and SHA-1. This study also proposes an enhanced ChaCha, a stream cipher for low-duty-cycle IoT devices. For wireless connections between the IoT gateway and sensors with a maximum throughput of 1.5 Mbps, the proposed model employs a wireless error rate (WER) of 0.05; the throughput rises with an increase in the transmission data rate.</p>Arun RanaSumit RanaVikram BaliRashmi Dassardar islamDebendra MuduliRitu DewanAnurag Singh
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
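The abstract above compares SHA-224, SHA-1, and SHA-256 session costs over CoAP. A rough feel for the trade-off can be had with Python's standard `hashlib`; this sketch only contrasts digest sizes and local hashing time on a made-up sensor payload, and does not reproduce the paper's CoAP session measurements:

```python
# Compare digest sizes and hashing time of the three hash functions
# discussed above, using the standard-library hashlib module.
import hashlib
import timeit

payload = b"temperature=21.5;humidity=40" * 100  # invented sensor payload

for name in ("sha1", "sha224", "sha256"):
    digest = hashlib.new(name, payload).hexdigest()
    seconds = timeit.timeit(lambda: hashlib.new(name, payload).digest(), number=1000)
    print(f"{name}: {len(digest) * 4}-bit digest, "
          f"{seconds * 1e6:.0f} us per 1000 hashes")
```

SHA-224 and SHA-256 share the same compression function (SHA-224 is a truncated SHA-256), so on a general-purpose CPU their hashing time is nearly identical; the session-setup advantage the paper reports for SHA-224 therefore comes from transmitting a shorter digest, not from faster computation.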
2024-10-022024-10-021010.4108/eetiot.6703A Probabilistic Descent Ensemble for Malware Prediction Using Deep Learning
https://publications.eai.eu/index.php/IoT/article/view/6774
<p>INTRODUCTION: This work introduces a Probabilistic Descent Ensemble (PDE) approach for enhancing malware prediction through deep learning, leveraging the power of multiple neural network models with distinct architectures and training strategies to achieve superior accuracy while minimizing false positives.</p><p>OBJECTIVES: Combining Stochastic Gradient Descent (SGD) with early stopping is a potent approach to optimising deep learning model training. Early stopping, a vital component, monitors a validation metric and halts training if it stops improving or degrades, guarding against overfitting.</p><p>METHODS: This synergy between SGD and early stopping creates a dynamic framework for achieving optimal model performance, adaptable to diverse tasks and datasets, with potential benefits including reduced training time and enhanced generalization capabilities.</p><p>RESULTS: The proposed work trains a Gaussian NB classifier with SGD as the optimization algorithm. Gaussian NB is a probabilistic classifier that assumes the features follow a Gaussian (normal) distribution; SGD is an optimization algorithm that iteratively updates model parameters to minimize a loss function.</p><p>CONCLUSION: The proposed work achieves 99% accuracy in malware prediction and is free from overfitting and local minima.</p>R. Vinoth KumarR. Suguna
Copyright (c) 2024 EAI Endorsed Transactions on Internet of Things
https://creativecommons.org/licenses/by/3.0/
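The Gaussian NB classifier named in the abstract above has a simple closed-form core: per class, a prior plus a per-feature mean and variance, combined at prediction time through log-Gaussian likelihoods. The sketch below fits it directly from sample statistics (not via the SGD training the paper describes); the two "malware vs benign" features and all sample values are invented:

```python
# Minimal Gaussian Naive Bayes sketch fitted in closed form; all data invented.
import math

def fit(samples):
    """Per class: prior probability plus per-feature (mean, variance)."""
    model = {}
    total = len(samples)
    for label in {l for _, l in samples}:
        rows = [x for x, l in samples if l == label]
        stats = []
        for i in range(len(rows[0])):
            col = [r[i] for r in rows]
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9  # smoothing
            stats.append((mean, var))
        model[label] = (len(rows) / total, stats)
    return model

def predict(model, x):
    """Pick the class with the highest log posterior under per-feature Gaussians."""
    best, best_score = None, -math.inf
    for label, (prior, stats) in model.items():
        score = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            score += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented features: (api_calls_per_second, entropy_of_binary)
train = [((200, 7.8), "malware"), ((180, 7.5), "malware"),
         ((20, 4.1), "benign"), ((35, 4.6), "benign")]
model = fit(train)
print(predict(model, (190, 7.7)), predict(model, (25, 4.3)))  # → malware benign
```

Training the same model with SGD, as the paper proposes, replaces the closed-form mean/variance estimates with iterative parameter updates against a loss, which is what allows it to be combined with early stopping in an ensemble.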
2024-10-012024-10-011010.4108/eetiot.6774