References
[1] M. S. Sani and A. K. Bardsiri, “Providing a New Smart Camera Architecture for Intrusion Detection in Wireless Visual Sensor Network,” Journal of Information Systems and Telecommunication, vol. 11, no. 1, 2023, doi: 10.52547/jist.15672.11.41.31.
[2] H. Nandanwar and R. Katarya, “Deep learning enabled intrusion detection system for Industrial IOT environment,” Expert Syst Appl, vol. 249, p. 123808, Sep. 2024, doi: 10.1016/j.eswa.2024.123808.
[3] M. F. Elrawy, A. I. Awad, and H. F. A. Hamed, “Intrusion detection systems for IoT-based smart environments: a survey,” Journal of Cloud Computing, vol. 7, no. 1, p. 21, Dec. 2018, doi: 10.1186/s13677-018-0123-6.
[4] A. Kaushik and H. Al-Raweshidy, “A novel intrusion detection system for internet of things devices and data,” Wireless Networks, vol. 30, no. 1, pp. 285–294, Jan. 2024, doi: 10.1007/s11276-023-03435-0.
[5] R. Rathna, L. M. Gladence, J. S. Cynthia, and V. M. Anu, “Energy Efficient Cross Layer MAC Protocol for Wireless Sensor Networks in Remote Area Monitoring Applications,” Journal of Information Systems and Telecommunication, vol. 9, no. 35, 2021, doi: 10.52547/jist.9.35.207.
[6] S. Alosaimi and S. M. Almutairi, “An Intrusion Detection System Using BoT-IoT,” Applied Sciences, vol. 13, no. 9, p. 5427, Apr. 2023, doi: 10.3390/app13095427.
[7] V. Choudhary, S. Tanwar, and T. Choudhury, “Evaluation of contemporary intrusion detection systems for internet of things environment,” Multimed Tools Appl, vol. 83, no. 3, pp. 7541–7581, Jan. 2024, doi: 10.1007/s11042-023-15918-5.
[8] A. Awajan, “A Novel Deep Learning-Based Intrusion Detection System for IoT Networks,” Computers, vol. 12, no. 2, p. 34, Feb. 2023, doi: 10.3390/computers12020034.
[9] R. Chaganti, W. Suliman, V. Ravi, and A. Dua, “Deep Learning Approach for SDN-Enabled Intrusion Detection System in IoT Networks,” Information, vol. 14, no. 1, p. 41, Jan. 2023, doi: 10.3390/info14010041.
[10] E. Anthi, L. Williams, M. Slowinska, G. Theodorakopoulos, and P. Burnap, “A Supervised Intrusion Detection System for Smart Home IoT Devices,” IEEE Internet Things J, vol. 6, no. 5, pp. 9042–9053, Oct. 2019, doi: 10.1109/JIOT.2019.2926365.
[11] T. Saba, A. Rehman, T. Sadad, H. Kolivand, and S. A. Bahaj, “Anomaly-based intrusion detection system for IoT networks through deep learning model,” Computers and Electrical Engineering, vol. 99, p. 107810, Apr. 2022, doi: 10.1016/j.compeleceng.2022.107810.
[12] M. Almiani, A. AbuGhazleh, A. Al-Rahayfeh, S. Atiewi, and A. Razaque, “Deep recurrent neural network for IoT intrusion detection system,” Simul Model Pract Theory, vol. 101, p. 102031, May 2020, doi: 10.1016/j.simpat.2019.102031.
[13] M. Nazarpour, N. Nezafati, and S. Shokouhyar, “Detection of Attacks and Anomalies in the Internet of Things System using Neural Networks Based on Training with PSO Algorithms, Fuzzy PSO, Comparative PSO and Mutative PSO,” Journal of Information Systems and Telecommunication, vol. 10, no. 40, 2022, doi: 10.52547/jist.16307.10.40.270.
[14] A. Heidari and M. A. Jabraeil Jamali, “Internet of Things intrusion detection systems: a comprehensive review and future directions,” Cluster Comput, vol. 26, no. 6, pp. 3753–3780, Dec. 2023, doi: 10.1007/s10586-022-03776-z.
[15] D. Musleh, M. Alotaibi, F. Alhaidari, A. Rahman, and R. M. Mohammad, “Intrusion Detection System Using Feature Extraction with Machine Learning Algorithms in IoT,” Journal of Sensor and Actuator Networks, vol. 12, no. 2, p. 29, Mar. 2023, doi: 10.3390/jsan12020029.
[16] A. Kumar, K. Abhishek, M. R. Ghalib, A. Shankar, and X. Cheng, “Intrusion detection and prevention system for an IoT environment,” Digital Communications and Networks, vol. 8, no. 4, pp. 540–551, Aug. 2022, doi: 10.1016/j.dcan.2022.05.027.
[17] S. U. Jan, S. Ahmed, V. Shakhov, and I. Koo, “Toward a Lightweight Intrusion Detection System for the Internet of Things,” IEEE Access, vol. 7, pp. 42450–42471, 2019, doi: 10.1109/ACCESS.2019.2907965.
[18] H. Alazzam, A. Sharieh, and K. E. Sabri, “A feature selection algorithm for intrusion detection system based on Pigeon Inspired Optimizer,” Expert Syst Appl, vol. 148, p. 113249, Jun. 2020, doi: 10.1016/j.eswa.2020.113249.
[19] R. Zhao et al., “A Novel Intrusion Detection Method Based on Lightweight Neural Network for Internet of Things,” IEEE Internet Things J, vol. 9, no. 12, pp. 9960–9972, 2022, doi: 10.1109/JIOT.2021.3119055.
[20] S. M. Kasongo, “A deep learning technique for intrusion detection system using a Recurrent Neural Networks based framework,” Comput Commun, vol. 199, pp. 113–125, Feb. 2023, doi: 10.1016/j.comcom.2022.12.010.
[21] V. Hnamte, H. Nhung-Nguyen, J. Hussain, and Y. Hwa-Kim, “A Novel Two-Stage Deep Learning Model for Network Intrusion Detection: LSTM-AE,” IEEE Access, vol. 11, pp. 37131–37148, 2023, doi: 10.1109/ACCESS.2023.3266979.
Journal of Information Systems and Telecommunication
http://jist.acecr.org    ISSN 2322-1437 / EISSN 2345-2773

Enhancing IoT Security: A Comparative Analysis of Hybrid Hyperparameter Optimization for Deep Learning-Based Intrusion Detection Systems

Heshmat Asadi1, Mahmood Alborzi1*, Hesam Zandhesami1

1. Department of Management and Economics, Science and Research Branch, Islamic Azad University, Tehran, Iran.

Received: 23 May 2024 / Revised: 04 Aug 2024 / Accepted: 22 Sep 2024
Abstract
Rapidly expanding domains such as the Internet of Things require sophisticated approaches to securing interconnected devices against cyber threats. This study addresses a crucial gap in the state of effective intrusion detection systems (IDS) for the Internet of Things through a comparative analysis of hyperparameter optimization approaches for improving existing and future detection systems. Our main goal was to investigate and compare hyperparameter optimization strategies in order to identify and assess the most effective way to improve the performance of deep learning-based IDS. Our methodology comprised a comparative optimization analysis in which a hybrid optimization approach was compared against stand-alone implementations of Harmony Search and Bayesian Optimization. The analysis was conducted quantitatively on an IDS trained and tested on simulated Internet of Things network data, and IDS performance was evaluated using the following metrics: accuracy, precision, recall, and F1 score. The comparison showed that the hybrid optimization achieved the best performance, with accuracy of 99.74%, precision of 99.7%, recall of 99.72%, and F1 score of 99.71%. The results confirm the efficiency of combining multiple optimization approaches and reveal the potential of such a combination for effective hyperparameter optimization of deep learning-based IDS in Internet of Things environments.
Keywords: Internet of Things; Intrusion Detection System; Hyperparameter Optimization; Deep Learning; Harmony Search; Bayesian Optimization.
1- Introduction
Background Information: The evolution of Internet of Things (IoT) technologies has enabled people to engage with their environment at an unprecedented level, with boundless connectivity and information exchange between devices and platforms. Such integration enables numerous applications, ranging from smart homes and healthcare to industrial and environmental monitoring. The growing complexity of the technology, however, increases the possibility of numerous threats appearing within the environment. Intrusion detection systems are crucial for IoT protection, as they analyze network traffic and raise alerts when a threat is detected [1]. Deep learning algorithms and artificial intelligence technologies have substantially increased the effectiveness of such systems, making them capable of detecting new, sophisticated threats that traditional approaches would miss. As a subset of machine learning, deep learning uses neural networks with multiple layers, i.e. deep architectures, to gain insight and decision-making capacity from vast amounts of data. Such an approach is particularly necessary in the IoT context, where the sheer number of interoperable devices creates complexity that traditional security measures cannot handle [2], [3].
Given that IoT environments are dynamic and heterogeneous and that the landscape of cyber threats keeps changing, there is a pressing need for advanced IDS solutions capable of responding to new challenges. In this regard, the use of AI and deep learning algorithms in IDS is highly promising, since it offers a way to develop proactive approaches to threat detection and threat management, which are critical for the resiliency and security of IoT systems. Security measures such as IDS are essential given the ever-growing role of IoT in all segments of our lives. Ultimately, AI, and deep learning techniques in particular, illustrate how security challenges can be addressed in the intricate IoT ecosystem [4], [5].
Gaps Identified: In summary, while broad strides have been made in improving IoT security with deep learning-based IDS, several critical gaps remain on both the research and implementation fronts. The variability and dynamism of IoT environments is a dimension not fully addressed by current IDS. IoT devices cannot be separated from the contexts in which they are used, and the enormous number of such contexts demands IDS that are highly adaptable and, consequently, scalable. Current IDS solutions struggle to identify new threat and attack patterns in volatile settings [6]. Deep learning also has a significant limitation in its ability to identify new patterns of intrusion, which strongly affects its efficiency in mitigating zero-day attacks. The need for large, pre-annotated datasets to train deep learning models for good performance is a major setback for ensuring that IDS remain predictive against new attack vectors. Thus, alternative strategies to enhance IDS predictability, independent of historical data, are necessary [4], [7].
On top of that, the high computational intensity of deep-learning-based algorithms creates another issue, mainly for low-power devices such as those in the IoT. Implementing advanced IDS solutions that require enormous computational power would overburden the limited compute capabilities of thousands of IoT devices, leading to inefficiencies or even disruption of service delivery. In addition, there are controversial ethical and privacy considerations associated with developing and deploying AI- and deep learning-based solutions: the volume of data that AI-based IDS solutions handle means that the likelihood of misuse and exploitation grows as well. A final issue with these IDS solutions is the lack of standardized datasets and performance benchmarks tailored to such systems, a gap that limits the depth to which new deep-learning-based IDS solutions can be investigated. These gaps outline the areas in which further research and improvements must be considered in order to enable the effective use of deep learning and AI for truly enhanced IoT security. Given the substantial number of challenges and issues, additional research is still needed to implement IDS solutions for the IoT that are effective and efficient, can adapt to growing threats, and can uphold the level of protection needed to meet privacy and security requirements [8], [9].
Research Question or Hypothesis: Given the identified literature gaps within intrusion detection systems for the Internet of Things built on deep learning and artificial intelligence, the study's hypothesis is based on the question: How can an optimal model be developed, and how can the implementation of deep learning and artificial intelligence be improved, to enhance the adaptability, efficiency, and scalability of IDS across different IoT settings while addressing zero-day attacks, computational limits, and privacy concerns? This question can be divided into a set of sub-questions that are highly important for the advancement of the field: To what extent can architectural changes make DL models more adaptable to changes in the IoT environment? How can a deep learning-based IDS detect zero-day attacks without requiring large pre-labeled datasets? What changes can be made to existing deep learning IDS algorithms so that they remain usable in IoT environments where computational resources are tightly constrained? Lastly, how can AI and DL be used in IDS without compromising the privacy of IoT users?
Moreover, we hypothesize that:
· Federated learning will increase the adaptability and scalability of IDS in the IoT: federated learning models let the IDS learn from both central and distributed data sources, reducing the privacy risks associated with centralization, so a federated-learning-based IDS can be deployed over a wide network without jeopardizing data security.
· Detection of zero-day attacks will be significantly improved by unsupervised learning and anomaly detection techniques: unsupervised techniques that do not require labeled data help detect the deviations from normal states that characterize zero-day attacks.
· Lightweight neural network models and edge computing will overcome the computational constraints of IoT devices: the neural network models have to be light enough to handle data with sequential structure and complex networking patterns in real time.
· Differential privacy (DP) and secure multi-party computation (SMPC) applied in the analysis step will preserve data privacy in IoT IDS: DP and SMPC address the pervasive privacy concerns regarding data collected and analyzed before and after an IDS is deployed.
Objectives or Aims of the Study: The primary goal of this research is to tackle the major challenges in developing responsive, efficient, and adaptive intrusion detection systems for the Internet of Things using deep learning algorithms and artificial intelligence [10]. The study is expected to accomplish the following objectives:
· Improved IDS architectures: Develop and experiment with IDS models that can dynamically adapt to IoT environments as they remain heterogeneous and continue to grow. This involves designing deep learning architectures that can learn variations in device types and IoT operating contexts, as well as scalable architectures that can expand with the growth of the IoT [11].
· Improved zero-day attack detection: Develop new methods for early detection of zero-day attacks using artificial intelligence and deep learning. This involves unsupervised and semi-supervised learning models that can learn and identify new attacks from detected anomalies without relying on pre-labeled training datasets.
· Optimization of computational efficiency: Develop architectures that minimize the computational cost of an intrusion detection system, with efficient deep learning models that can infer and detect attacks in real time without relying on centralized computing power, and employ edge computing for real-time threat detection with reduced system latency.
· Protection of user privacy: Develop an intrusion detection system that maintains user privacy, with detection models that do not require user data to be sent to a central location, and privacy-preserving techniques such as secure multi-party computation and differential privacy for data processed in the inference system.
· Benchmarking of IDS performance: Develop a benchmark for testing and evaluating new deep learning-based intrusion detection models, including a test database of IoT log data and benchmarking metadata for evaluating model performance and research quality.
· Real-world application: Deploy the developed IDS in real-life Internet of Things applications and conduct field trials, using the intrusion detection model to detect threats and intrusions in a field IoT setup [12], [13].
Significance of the Study: The implications of this study are extensive, with the potential to transform the security of IoT ecosystems in the following ways: 1. Improve the security of IoT end devices and networks. Primarily, the study targets enhancing the security of IoT devices and networks against malicious threats of varying levels of sophistication, including zero-day attacks. Improving the adaptability, efficiency, and scalability of IDS using deep learning and artificial intelligence would reduce the vulnerability of IoT ecosystems to compromise and uphold data integrity and confidentiality across various applications. 2. Contribute to developments in deep learning and artificial intelligence for cybersecurity. The study also offers valuable insights into architectural adjustments, unsupervised learning approaches, and the design of lightweight models, which would benefit the continued development of deep learning and artificial intelligence devoted to cybersecurity. 3. Promote user privacy and ethical data usage in IoT systems. This study offers a counter-narrative to the privacy critique of IDS in the IoT: the adoption of privacy-enhancing technologies such as federated learning and differential privacy shows that it is possible to build robust cybersecurity mechanisms without necessarily compromising user privacy, setting an ethical precedent for AI-centered security system development. 4. Aid in the deployment of real-world IoT security applications. This study could be instrumental for practical security deployments in IoT-dependent applications. The model created in this research is lightweight and computationally friendly to edge devices, which supports the adoption of advanced security standards in real-world implementations.
Structure: The article is structured to provide a clear path through the complexities surrounding the use of deep learning algorithms and artificial intelligence to enhance intrusion detection systems in the IoT sector. The organization of the article is as follows. * Introduction: introduces the topic and provides the background needed to understand the relevance of and claims surrounding IDS in the IoT; it also presents the research questions, the significance of the study, and the study objectives. * Literature review: a comprehensive review of prior work, with a candid examination of current research patterns, methodologies, and results in applying deep learning and AI to IoT detection systems, including a discussion of research trends, emerging issues, and remaining gaps that situate the topic within the academic debate. * Methodology: outlines the research design and details the selection of deep learning methods, data collection and preparation, and the evaluation methods used to measure the performance of the proposed IDS solutions. * Results: presents the practical outcomes of the research, detailing the data analysis techniques and model performance that support the proposed IDS enhancements, and sets the stage for their interpretation. * Discussion: interprets the results, links the outcomes to the current IoT security landscape and the suitability of the proposed methods for adoption, and compares the outcomes of this research with those of other studies. * Conclusion: summarizes the key findings, discusses implications and limitations, offers the authors' insights, and gives recommendations for future work, restating the relevance of AI and deep learning for upgrading IoT IDS. * References: a comprehensive collection of all sources used, providing an avenue for further reading.
2- Literature Review
Overview of the Topic: The advent of the Internet of Things and sophisticated computational technology has provided limitless possibilities for creative solutions in every field, from healthcare and smart cities to the automation of industrial processes. However, cybersecurity remains the most critical aspect, and in this context it implies the use of reliable, efficient, custom-built intrusion detection systems for the IoT. The need for IDS in the IoT stems from the exposure of interconnected devices and the complexity of modern cyber threats. The development of deep learning and artificial intelligence technologies has created new opportunities to improve the efficiency, versatility, and predictive value of IDS, making the security of IoT devices more stable and intelligent [14].
Historical Context: IDS have a long history that begins in the early days of computer networks, when simple anomaly detection scripts evolved into sophisticated systems capable of real-time threat analysis and mitigation. Developed for traditional IT infrastructure, IDS relied on signature-based detection mechanisms; however, the appearance of IoT technologies challenged these systems with high heterogeneity, resource constraints, and unique attack vectors. As a result, the IDS development paradigm changed: against the background of these problems, DL and AI were integrated into IDS, using neural networks' ability to learn from complex data sets and to recognize patterns that signal malicious activity [15].
Key Themes and Findings: There is a large body of research dedicated to using DL and AI to improve the performance and efficiency of IDS in IoT systems. Comparative studies of different methods suggest that, thanks to their advanced pattern recognition and memorization capabilities, DL and AI are markedly superior to existing methods in detecting known and unknown threats. As an illustration, convolutional neural networks and recurrent neural networks can identify intricate attack patterns that cannot be identified by traditional means. However, IoT networks are highly dynamic, and the number of devices and configurations changes constantly; the literature therefore proposes adaptive learning methods that do not require complete retraining to account for new data. Scalability, another prominent challenge for IDS, is addressed through distributed and federated learning methods that enable much more efficient data processing while ensuring the privacy of information across multiple IoT devices. Lastly, considering that IoT devices have limited computational resources, studies propose lightweight DL models, which offer a good balance between accuracy and computational cost; such models can be scaled down through pruning, quantization, and knowledge distillation. Because sensitive data are involved, research also emphasizes privacy, proposing differential privacy and secure multi-party computation to this end [16].
The critical analysis section of our literature review on “Intrusion Detection Systems in the Internet of Things Using Deep Learning Algorithms and Artificial Intelligence” marks a shift toward a more critical engagement with what other scholars have done. Our analytical discourse is not a reproduction of these studies' findings and methodologies per se; we want to interact with them, considering their strengths, weaknesses, and the differences in the results they have produced. Specifically, we want to deconstruct the layers of research that inform what has been studied regarding IDS in IoT ecosystems, since this understanding is changing the way these problems are addressed with deep learning algorithms and AI. Spanning classical and contemporary work, we provide a systematic review of studies dealing with the cybersecurity problem in IoT environments through diverse methodologies that have led to conclusions of limited generalizability. At the same time, we use these studies as a heuristic lens for interpreting open issues, controversies, and the limits of their findings. This kind of analysis is an exploration of the acknowledged literature that guides us in developing flexible IDS perspectives.
Paper 1: Toward a Lightweight Intrusion Detection System for the Internet of Things [17].
Research objective: This paper aims to develop an intrusion detection system that is lightweight in its computational requirements so that it can operate appropriately in Internet of Things environments. Its focus is a resource-conscious system that can detect denial-of-service (DoS) and other malicious activities in the IoT without exceeding the relatively low computational capacity of IoT devices. Methodology: The study uses a supervised machine learning approach based on SVM, relying on hand-crafted features extracted from network traffic within the IoT environment. In particular, it focuses on packet arrival rates to characterize normal network traffic. The features derived from packet arrival rates are fed to the SVM to separate the traffic observed during training into non-intrusion and intrusion classes. Dataset description: The study uses data generated to simulate network traffic in an Internet of Things environment, containing both simulated normal traffic flows and traffic simulated from several different attacks against the IoT network. Normal and anomalous network states are distinguished by the decision rules learnt by the SVM classifier in the proposed architecture. Results and key findings/limitations: The results show that the SVM classifier, using the features derived from packet arrival rates, detects a good percentage of malicious network traffic within the simulated scenarios.
Paper 2: A feature selection algorithm for intrusion detection system based on Pigeon Inspired Optimizer [18].
Research Objective: The purpose of this study was to develop a wrapper feature selection algorithm for intrusion detection systems based on the pigeon-inspired optimizer (PIO). The study aimed to enhance the feature selection process in IDS to improve model accuracy and efficiency by eliminating irrelevant and redundant features. Methodology: The researchers proposed an innovative method to binarize the continuous form of the pigeon-inspired optimizer in order to select suitable features for IDS. The methodology compared a novel binarization method based on cosine similarity with the conventional binarization method, which uses the sigmoid function to convert continuous swarm intelligence algorithms into discrete forms appropriate for discrete problems. Dataset description: The researchers used three benchmark datasets for evaluation: KDDCUP 99, NSL-KDD, and UNSW-NB15. These are among the most widely used datasets in the network security field for testing and validating IDS models, and they were selected to demonstrate the strength and applicability of the new feature selection algorithm across various types of network intrusion data. Key findings and results: Performance on all three datasets was excellent after applying the proposed feature selection algorithm, measured by four metrics: true positive rate, false positive rate, accuracy, and F-score. The cosine-similarity variant performed best and converged faster. The results are promising, since the algorithm reduces the dimensionality of large datasets while keeping IDS accuracy high.
Paper 3: A Novel Intrusion Detection Method Based on Lightweight Neural Network for Internet of Things [19].
Research Objective: The goal of this research is to propose a lightweight neural network-based intrusion detection system for the Internet of Things and to develop a new detection methodology. Its purpose is to work around the limited computational power and energy resources of IoT devices so that malicious activities can be detected promptly and efficiently, minimizing the impact on device performance. Method: The research proposes a new intrusion detection framework based on a lightweight neural network model. The methodology comprises three major stages: data preprocessing, feature selection, and classification. Data preprocessing cleans and normalizes IoT traffic data prior to further analysis. Feature selection identifies and retains the features that are most meaningful for characterizing network conditions and detecting attacks, thus reducing the computational effort required to analyze the data on IoT devices. Lastly, in the classification phase, a lightweight neural network model classifies traffic as normal or malicious. Data Description: The research uses a publicly available IoT dataset. The dataset simulates typical scenarios of both normal traffic and attack attempts in IoT networks, gathered from the traffic flows of a wide array of IoT devices, and contains a variety of attacks such as DoS, DDoS, MITM, and data theft. This dataset is well suited for the experiment because it covers a broad range of conditions and situations for evaluating the model. Key Findings: The proposed lightweight neural network-based detection system demonstrated a high detection rate in all evaluated scenarios without requiring much computational power. It also detected most known and previously unseen attacks while producing few false positives. These results suggest that lightweight yet effective detection systems can be used with IoT systems in the future.
Paper 4: A Deep Learning Technique for Intrusion Detection System Using a Recurrent Neural Networks Based Framework [20].
Research Objectives: This study aims to improve network security with an intrusion detection system framework built on recurrent machine learning models, including Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU), and Simple RNN. The goal of this framework is to recognize emerging forms of cyberattacks, given the complexity of state-of-the-art networking and communication technologies. Methodology: The methodology first reduces the feature dimension for classification using the XGBoost machine learning model and then applies the aforementioned RNNs for feature extraction and classification. The proposed IDS framework is designed to handle a large feature space by relying on XGBoost-based feature selection. The framework's performance is evaluated using test accuracy, validation accuracy, F1-score, and related measures. Datasets: The study uses two benchmark datasets, NSL-KDD and UNSW-NB15. These datasets cover a wide range of attack categories as well as normal traffic patterns, giving researchers a basis for comparison with the proposed IDS framework. NSL-KDD records DoS, Probe, R2L, and U2R attacks, whereas UNSW-NB15 covers virtually all the other major attack categories, including Exploits, Shellcode, and Reconnaissance. Key Findings/Results: The results of the multiclass and binary classification tasks show that the proposed IDS framework outperformed the benchmarks with high test accuracy. The XGBoost-LSTM model performs best in binary classification of normal versus attack traffic on the NSL-KDD dataset, whereas on the UNSW-NB15 dataset the XGBoost-GRU model identifies the types of attack traffic more effectively. Limitations/Challenges: Feature dimensions are likely to grow rapidly and attack patterns constantly evolve, making it difficult to maintain high detection accuracy. Furthermore, relying on benchmark datasets may leave the system untested against unseen attacks. Future extensions could use a more modern feature-ranking approach and evaluate the model on datasets containing unseen network traffic.
To summarize and compare the research studies on intrusion detection systems for the Internet of Things, we have compiled a brief appraisal in tabular form. The table summarizes, for each study, the main goals, the methods used, the datasets employed for testing, the core outcomes, the performance measures, and the issues or limitations identified. This comparison makes it easier to understand the contribution, advantages, and remaining gaps of each study.
Table 1: Comparison of algorithms.
Toward a Lightweight Intrusion Detection System for the Internet of Things[17] | ||
Research Objective | Develop a lightweight IDS for IoT that detects DoS attacks without burdening IoT devices' computational resources. | |
Methodology | Supervised machine learning using SVM, focusing on packet arrival rates for feature extraction. | |
Data Set Description | Simulated IoT network traffic to mimic normal and attack scenarios. | |
Key Findings/Results | High classification accuracy with a low computational footprint, suitable for constrained IoT environments. | |
Performance Metrics | Classification accuracy, detection speed. | |
Limitations and Challenges | Dependency on simulated data may not capture real-world IoT complexities; focus on packet arrival rates may miss sophisticated attacks. | |
A Feature Selection Algorithm for Intrusion Detection System Based on Pigeon Inspired Optimizer[18] | ||
Research Objective | Optimize IDS feature selection using the pigeon inspired optimizer (PIO) to enhance model accuracy and efficiency. | |
Methodology | Novel approach to binarize the PIO for effective feature selection in IDS; comparison of cosine similarity with traditional sigmoid function. | |
Data Set Description | KDDCUP 99, NSL-KDD, and UNSW-NB15 datasets. | |
Key Findings/Results | Superior performance in reducing data dimensionality while maintaining high detection accuracy; faster convergence with cosine similarity. | |
Performance Metrics | TPR, FPR, accuracy, F-score. | |
Limitations and Challenges | Predefined datasets may not represent real-world network dynamics; computational complexity of PIO. | |
A Novel Intrusion Detection Method Based on Lightweight Neural Network for Internet of Things[19] | ||
Research Objective | Develop a lightweight neural network-based IDS for IoT, addressing computational and energy constraints. | |
Methodology | Data preprocessing, feature selection, and classification using a lightweight neural network model optimized for low overhead. | |
Data Set Description | Public IoT dataset simulating normal and attack scenarios. | |
Key Findings/Results | High accuracy in detecting various intrusion attempts with minimal false positives; feasibility of deployment in resource-constrained IoT devices. | |
Performance Metrics | Detection rate, computational footprint. | |
Limitations and Challenges | Dependence on dataset quality and diversity; challenge of balancing detection accuracy with computational efficiency. | |
A Deep Learning Technique for Intrusion Detection System Using a Recurrent Neural Networks Based Framework[20] | ||
Research Objective | Enhance network security with an IDS framework employing various RNNs to detect new and evolving network attacks. | |
Methodology | Application of RNNs for feature extraction and classification; use of XGBoost for feature selection in NSL-KDD and UNSW-NB15 datasets. | |
Data Set Description | NSL-KDD and UNSW-NB15 datasets including a range of attack types and normal traffic. | |
Key Findings/Results | Optimal performance in binary and multiclass classification tasks; effective integration of RNNs with feature selection algorithms. | |
Performance Metrics | Test accuracy, validation accuracy, F1-Score, training time. | |
Limitations and Challenges | Maintaining accuracy with growing feature dimensions and attack patterns; reliance on benchmark datasets may limit real-world applicability. |
3- Proposed Protocol
3-1- Overview of Methodological Approach
An innovative deep learning-based intrusion detection system has been developed for Internet of Things (IoT) networks. This system is designed using advanced deep learning techniques, specifically tailored to address the unique challenges presented by IoT environments. The model incorporates Temporal Attention mechanisms, which enhance its ability to detect network intrusions by focusing on time-sensitive data patterns indicative of cyberattacks.
This approach was chosen due to the vast amounts of data with complex temporal relationships generated by IoT networks. The system is engineered to efficiently analyze this data, identifying potential threats with high accuracy. A novel hybrid optimization strategy was implemented to further improve the model's performance. This strategy combines the Harmony Search algorithm with Bayesian optimization techniques, leveraging the strengths of both methods – Harmony Search's efficiency in exploring solution spaces and Bayesian optimization's precision in fine-tuning parameters.
The development of this system was motivated by the alarming increase in cyber threats targeting IoT systems in recent years. Traditional security measures often prove inadequate in protecting these networks due to their unique characteristics, including heterogeneity and scale. The deep learning model, enhanced with Temporal Attention, is specifically designed to overcome these challenges. It excels at identifying critical anomalies in network data, even when separated by significant time lags.
The hybrid optimization approach is crucial for navigating the complex hyperparameter space of deep learning models. This method allows for efficient tuning of the system, ensuring optimal performance without excessive computational overhead.
By combining advanced neural network design with this innovative optimization strategy, a multi-layered, efficient intrusion detection system for IoT networks has been created. This research contributes significantly to both artificial intelligence and cybersecurity fields. It represents a step forward in developing robust security solutions capable of protecting IoT networks against evolving threats in an increasingly connected world.
3-2- Simulation Details
As previously mentioned, in our research geared toward developing an intrusion detection system for IoT networks, we used the Python programming language together with several key libraries, including Keras, TensorFlow, matplotlib, pandas, and NumPy. We opted for this software environment because it is highly versatile and offers comprehensive support for deep learning and data analysis, both essential for implementing and evaluating our IDS model. We ran our simulations and conducted the analysis on a hardware setup running Windows 11, with an Intel Core i7 CPU and 64 GB of RAM. These specifications provided adequate computational power and memory capacity for the training and processing tasks involved in deep learning on the large data volumes typical of IoT environments.
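To make this software environment concrete, the short check below imports the libraries listed above and prints their versions; it is a minimal sketch only, and the versions printed on any given machine are not necessarily those used in this study.

```python
# Minimal environment check for the libraries used in this study.
# The versions reported by a given installation are illustrative only.
import matplotlib
import numpy as np
import pandas as pd
import tensorflow as tf

for name, module in [("NumPy", np), ("pandas", pd),
                     ("matplotlib", matplotlib), ("TensorFlow", tf)]:
    print(f"{name}: {module.__version__}")
print(f"Keras (bundled with TensorFlow): {tf.keras.__version__}")
```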
3-3- Data Collection and Processing
Our research on boosting intrusion detection in IoT networks with deep learning techniques is based on an analysis of the UNSW-NB15 dataset. This dataset was painstakingly generated by the Cyber Range Lab of the Australian Centre for Cyber Security using the IXIA PerfectStorm tool, with the purpose of combining realistic background traffic with simulated contemporary attack activities. This combination makes it well suited for training and validating IDS models created specifically for IoT settings. The UNSW-NB15 dataset contains 49 distinct features, all derived by transforming raw network flows into flow-level metrics. This transformation captures a range of network behaviors, from normal to malicious. The features include source and destination IP addresses and ports; the transaction protocol; the number of packets transferred; the size of those packets in bytes; and additional statistical characteristics such as jitter, inter-packet arrival times, and TCP connection setup times. A particularly important property of the data is that each flow can be labeled as normal or as one of the nine attack categories listed in Table 2, which is invaluable for supervised learning tasks.
The UNSW-NB15 dataset leveraged in our simulation is an extensive collection of features designed to represent network traffic events and dynamics comprehensively. Its 49 distinct features capture various aspects of network data, ranging from fundamentals such as source and destination addresses, ports, and protocol to more complex indicators such as the number of bytes and packets sent, the transaction state, and multiple statistics on the flow of traffic. These features were carefully selected to enable a complete representation of the network environment, supporting the analysis and simulation of expected and emerging threats in network operation. The rich feature coverage of the UNSW-NB15 dataset is a key resource for building an IDS, since it presents various instances of benign activities and malicious threats against which the model's ability to identify and quarantine threats can be gauged. Next, we define the kinds of attacks modeled in the UNSW-NB15 dataset through a tabular summary of the “attack_cat” field, which indicates the specific attack category simulated in each record. This information is important because it forms the basis for our later assessment of how well the model identifies different attacks.
Table 2: Types of Attacks and Descriptions
Attack Category | Description |
Fuzzers | Attacks that involve sending a large amount of random data to a network service to cause a crash or leak information. |
Analysis | Techniques used to probe networks for vulnerabilities, including port scanning and sniffing. |
Backdoors | Malicious techniques that bypass normal authentication to secure remote access to a computer. |
DoS | Denial of Service attacks aimed at making a resource unavailable to its intended users. |
Exploits | Attacks that take advantage of software vulnerabilities to gain unauthorized access or privileges. |
Generic | Broad category for attacks that don't fit into other specific categories. |
Reconnaissance | The practice of gathering information about an enemy or potential adversary. |
Shellcode | Malicious code used to provide an attacker with control of a victim's system. |
Worms | Malware that replicates itself in order to spread to other computers. |
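As a concrete illustration, the sketch below loads a CSV release of UNSW-NB15 with pandas and tallies the attack categories listed in Table 2. The file name is an assumption about the local copy; the “attack_cat” and “label” column names follow the public release of the dataset.

```python
# Sketch: load a UNSW-NB15 CSV release and inspect the attack categories.
# "UNSW_NB15_training-set.csv" is an assumed local file name; adjust as needed.
import pandas as pd

df = pd.read_csv("UNSW_NB15_training-set.csv")

print(df.shape)                           # (rows, columns)
print(df["attack_cat"].value_counts())    # counts per category from Table 2
print(df["label"].value_counts())         # 0 = normal traffic, 1 = attack
```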
In the data processing part of our study, the UNSW-NB15 dataset went through several milestone steps. This multi-phase procedure was meant to raise the quality of the dataset so that it would support a highly effective and durable intrusion detection model for IoT networks. The first step was preprocessing, which covered cleaning the dataset and managing its integrity. The data first passed through a duplicate-removal stage, where repeated entries were removed so that only unique records were retained. Next, missing values were handled by imputing or deleting partially available records, depending on the extent of missingness, to preserve data integrity without losing completeness. Normalization was then performed by scaling numerical features to a uniform range; this is critical so that large-scale features do not wash out smaller ones, and it facilitates algorithm convergence, i.e., balanced training.
Transformation and feature engineering prepared the raw data for further analysis and modeling. Categorical variables were transformed into a numerical format using one-hot encoding, allowing our deep learning models to use the categorical information directly. The feature selection step retained the important features that contribute most to the model's predictive performance while reducing its computational overhead. Lastly, to minimize the risk of overfitting and reduce the feature space, we applied principal component analysis (PCA) as a dimensionality reduction technique to the dataset produced by the preceding steps.
During the data partitioning phase, the entire dataset was split into an 80% training set and a 20% test set. The large training set allowed the model to capture as much of the latent patterns and complexity in the network data as possible, while the held-out test set provided an impartial view of the intrusion detection system's ability to generalize. Taken together, these phases ensure that our study maintains a high standard of scientific rigor and produces a dataset appropriate for training intrusion detection systems.
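A minimal sketch of this preprocessing and partitioning pipeline is shown below, assuming the public CSV release of UNSW-NB15 and scikit-learn. The file name, the dropped identifier columns, the simple drop-based handling of missing values, the 95% PCA variance target, and the random seed are illustrative choices rather than the exact settings used in this study.

```python
# Sketch of the preprocessing pipeline described above: duplicate removal,
# missing-value handling, one-hot encoding, scaling, PCA, and an 80/20 split.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("UNSW_NB15_training-set.csv")        # assumed local file name

df = df.drop_duplicates()                             # duplicate removal
df = df.dropna()                                      # simple missing-value handling

y = df["label"]                                       # 0 = normal, 1 = attack
X = df.drop(columns=["id", "label", "attack_cat"], errors="ignore")
X = pd.get_dummies(X)                                 # one-hot encode categoricals

X_train, X_test, y_train, y_test = train_test_split(  # 80% / 20% partition
    X, y, test_size=0.20, stratify=y, random_state=42)

scaler = MinMaxScaler()                               # scale features to [0, 1]
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

pca = PCA(n_components=0.95)                          # keep 95% of the variance
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)

print(X_train.shape, X_test.shape)
```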
3-4- Simulation and Analytical Techniques
In this research, an advanced architecture for an intrusion detection system in Internet of Things (IoT) networks has been meticulously designed. This architecture leverages a combination of Convolutional Neural Networks (CNN), Gated Recurrent Units (GRU), and Attention Mechanisms to simultaneously provide spatial and temporal analysis capabilities.
The network architecture is structured as follows:
First CNN Layer: This layer serves as the primary foundation for extracting spatial features from input data. Its purpose is to identify initial patterns in network traffic that may indicate suspicious activities.
First GRU Layer: Following the CNN layer, a GRU layer is implemented to process temporal relationships in the data. This layer is specifically designed to analyze dynamic data streams in IoT networks, as it can retain important information over time.
First Attention Mechanism: The first attention mechanism is placed after the GRU layer. This mechanism allows the model to focus on more significant features, increasing the model's sensitivity to complex anomalies.
Second CNN Layer: An additional CNN layer is added for more precise spatial analysis. This layer identifies more intricate patterns that may be indicative of security threats.
Second GRU Layer: The final GRU layer is designed to enhance temporal analysis over longer periods. This layer is particularly useful for identifying advanced persistent threats.
Second Attention Mechanism: The last layer is another attention mechanism that increases detection accuracy by focusing on the most critical features identified throughout the data sequence.
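A minimal Keras sketch of this layer stack is given below. The filter counts, kernel sizes, GRU units, and attention heads are placeholder values, since the actual settings are selected by the hyperparameter optimization described later in this section; the input shape assumes flow records grouped into short time windows, and the self-attention layers stand in for the attention mechanisms named above.

```python
# Sketch of the CNN-GRU-Attention stack described above (placeholder sizes).
from tensorflow.keras import layers, models

def build_ids_model(timesteps, n_features, n_classes=2):
    inputs = layers.Input(shape=(timesteps, n_features))

    # First CNN layer: spatial feature extraction from the flow window.
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)

    # First GRU layer: temporal dependencies across the sequence.
    x = layers.GRU(64, return_sequences=True)(x)

    # First attention mechanism: self-attention over the GRU outputs.
    x = layers.MultiHeadAttention(num_heads=2, key_dim=32)(x, x)

    # Second CNN layer: finer-grained spatial patterns.
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(x)

    # Second GRU layer: longer-range temporal analysis.
    x = layers.GRU(32, return_sequences=True)(x)

    # Second attention mechanism, then pooling and classification.
    x = layers.MultiHeadAttention(num_heads=2, key_dim=16)(x, x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ids_model(timesteps=10, n_features=49)
model.summary()
```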
The complete details of the network architecture and model optimization process are presented in Table 3:
Table 3: Network Architecture and Model Optimization
Network Architecture Construction |
1. Start 2. Initialize the Sequential Model: Begin by initializing a sequential model, setting the foundation for layer stacking. 3. Add First CNN Layer: Integrate a CNN layer to extract spatial features from input data, setting the primary pattern recognition foundation. 4. Add First GRU Layer: Follow with a GRU layer to handle temporal dependencies efficiently, suitable for dynamic IoT network streams. 5. Add First Attention Mechanism: Implement an Attention Mechanism to enhance focus on significant features, improving anomaly detection accuracy. 6. Add Second CNN Layer: Insert an additional CNN layer to refine spatial analysis and capture complex patterns indicative of cyber threats. 7. Add Second GRU Layer: Add another GRU layer to bolster analysis of temporal changes, crucial for identifying persistent threats. 8. Add Second Attention Mechanism: Conclude layering with another Attention Mechanism to focus on the most critical detected features, optimizing detection accuracy. 9. Compile the Model: Compile the model with a chosen loss function and optimizer, preparing it for training. 10. End of Model Construction |
Model Optimization with Harmony Search (HS) and Bayesian Optimization (BO) |
1. Start Optimization 2. Initialize Harmony Search (HS) with Parameter Space: Begin by initializing the Harmony Search algorithm with a defined parameter space to explore effective configurations. 3. Perform HS Optimization to Explore the Global Parameter Space: · Generate Candidate Solutions: Systematically create different configurations as potential solutions. · Evaluate Fitness of Candidates: Assess the performance of each candidate in terms of predefined criteria. · Select the Best Candidates for the Next Generation: Choose the most promising solutions to carry forward. 4. Transition to Bayesian Optimization (BO) with HS's Best Candidates: · Initialize Bayesian Model with HS's Output: Use the output from Harmony Search as the initial condition for Bayesian Optimization. 5. Perform BO for Fine-Tuning: · Create New Solutions Based on Probabilistic Models: Generate new candidate solutions using the probabilistic models of Bayesian Optimization. · Adjust Solutions Using Probabilistic Insights and Random Sampling: Refine the solutions based on Bayesian predictions and random sampling techniques. · Evaluate New Solutions and Update the Model: Assess the performance of these solutions and update the probabilistic model accordingly. 6. Check for Optimization Convergence: · If Not Converged, Repeat from Step 5: Continue the optimization loop until the solutions converge to an optimal set of hyperparameters. · If Converged, Proceed to Finalize the Best Solution: Once convergence is achieved, finalize the best solution for model deployment. 7. Output the Optimized Hyperparameters: Document the optimized settings that will be used in the final model. 8. End of Optimization |
For model optimization, a hybrid approach combining Harmony Search (HS) and Bayesian Optimization (BO) has been employed. This approach proceeds as follows:
1. Initiation with Harmony Search: The HS algorithm is initially used for extensive search in the parameter space. Inspired by musical improvisation, this algorithm possesses high exploration and exploitation capabilities in the hyperparameter space.
2. Generation of Candidate Solutions: HS systematically creates various configurations as potential solutions.
3. Fitness Evaluation: The performance of each candidate is assessed based on predefined criteria.
4. Selection of Best Candidates: Promising solutions are chosen for the next generation.
5. Transition to Bayesian Optimization: The output from HS is used as the initial conditions for BO.
6. Execution of BO for Fine-tuning:
o Creation of new solutions based on probabilistic models
o Adjustment of solutions using probabilistic insights and random sampling
o Evaluation of new solutions and model updating
7. Convergence Check: This process continues until the optimal set of hyperparameters is achieved; a minimal code sketch of this two-stage loop is given below.
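As a rough illustration of the two-stage loop described above, the sketch below hand-rolls a small Harmony Search over the discrete grid from Table 4 and then seeds Bayesian Optimization (via scikit-optimize's gp_minimize, assumed available) with the best harmonies. The evaluate function is a hypothetical placeholder for training the model and returning a validation error, and all loop parameters (hm_size, hmcr, par, call counts) are illustrative assumptions, not the settings used in this study.

```python
import random
from skopt import gp_minimize          # assumed dependency: scikit-optimize
from skopt.space import Categorical

# Discrete search space taken from Table 4.
SPACE = {
    "units":      [50, 100, 200],
    "dropout":    [0.1, 0.15, 0.2, 0.25],
    "lr":         [0.0001, 0.001, 0.01],
    "epochs":     [50, 100, 200],
    "batch_size": [256, 512, 1024],
}
KEYS = list(SPACE)

def evaluate(config):
    """Hypothetical objective: in practice, train the CNN-GRU-Attention model with
    `config` and return 1 - validation accuracy. A synthetic score is used here
    only so the sketch runs end to end."""
    return config["dropout"] + config["lr"] + 50.0 / config["units"]

def harmony_search(n_iter=30, hm_size=10, hmcr=0.9, par=0.3):
    """Stage 1: broad exploration. Keep a harmony memory of configurations and
    replace the worst member whenever an improvised harmony beats it."""
    memory = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(hm_size)]
    scores = [evaluate(c) for c in memory]
    for _ in range(n_iter):
        new = {}
        for k in KEYS:
            if random.random() < hmcr:              # memory consideration
                new[k] = random.choice(memory)[k]
                if random.random() < par:           # pitch adjustment (re-draw)
                    new[k] = random.choice(SPACE[k])
            else:                                   # pure random selection
                new[k] = random.choice(SPACE[k])
        score = evaluate(new)
        worst = max(range(hm_size), key=scores.__getitem__)
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    order = sorted(range(hm_size), key=scores.__getitem__)
    return [memory[i] for i in order], [scores[i] for i in order]

def hybrid_hs_bo(n_seed=5, n_bo_calls=30):
    """Stage 2: seed Bayesian Optimization with the best HS harmonies (x0/y0)."""
    configs, scores = harmony_search()
    dims = [Categorical(SPACE[k], name=k) for k in KEYS]
    x0 = [[c[k] for k in KEYS] for c in configs[:n_seed]]
    result = gp_minimize(lambda x: evaluate(dict(zip(KEYS, x))),
                         dims, x0=x0, y0=scores[:n_seed], n_calls=n_bo_calls)
    return dict(zip(KEYS, result.x)), result.fun

if __name__ == "__main__":
    best_config, best_score = hybrid_hs_bo()
    print("Optimized hyperparameters:", best_config)
```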
To better understand the network architecture construction process, a comprehensive flowchart has been designed, illustrating the steps in a sequential manner:
Fig. 1. Network Architecture Construction Flowchart.
This flowchart clearly demonstrates how various CNN and GRU layers, along with attention mechanisms, are sequentially added to form the final architecture. Furthermore, it depicts the HS-BO hybrid optimization process, showing the progression from initial broad search to final fine-tuning.
This hybrid approach enables the discovery of optimal configurations for the model, ultimately leading to superior performance in intrusion detection within IoT networks. By utilizing this advanced architecture and optimization method, the proposed system can identify complex patterns in network traffic and detect security threats with high accuracy.
In summary, this architecture and hybrid optimization method present a powerful and flexible approach to addressing security challenges in IoT environments. Given the complexity and diversity of cyber-attacks in these environments, the implementation of such an intelligent system can significantly enhance the security of IoT networks. The synergy between deep learning techniques and sophisticated optimization strategies offers a robust solution for maintaining the integrity and safety of interconnected devices in the ever-evolving landscape of IoT security.
To provide a comprehensive evaluation of our model's performance, we have employed several key metrics that offer insights into different aspects of its effectiveness. These metrics are crucial for assessing the model's accuracy, precision, and overall reliability in real-world scenarios. Let us delve into each of these performance indicators:
Accuracy: This metric provides an overall assessment of the model's performance by measuring the proportion of correct predictions (both true positives and true negatives) among the total number of cases examined. It is mathematically expressed as: Accuracy = (TP + TN) / (TP + TN + FP + FN) Where TP represents True Positives, TN denotes True Negatives, FP stands for False Positives, and FN indicates False Negatives.
Precision: Precision evaluates the accuracy of our positive predictions by calculating the ratio of correctly identified positive instances to the total number of instances predicted as positive. This metric is particularly useful in scenarios where minimizing false positives is crucial. The mathematical formulation is: Precision = TP / (TP + FP)
Recall (Sensitivity): Also known as sensitivity, recall measures the model's ability to identify all true positive instances. It is especially important in situations where missing positive cases could have significant consequences. The equation for recall is: Recall = TP / (TP + FN)
F1 Score: The F1 Score provides a balanced measure of the model's performance by combining precision and recall into a single metric. It is particularly useful when dealing with imbalanced datasets or when there is a need to find an optimal balance between precision and recall. The F1 Score is calculated as the harmonic mean of precision and recall: F1 Score = 2 × (Precision × Recall) / (Precision + Recall) Because the harmonic mean penalizes large gaps between precision and recall, the F1 Score is only high when both are high, making it a more robust evaluation metric than a simple average.
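As a quick sanity check of these formulas, the snippet below computes the same four metrics with scikit-learn on a small, made-up set of binary labels (1 = intrusion, 0 = normal); the label vectors are purely illustrative.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Illustrative ground-truth and predicted labels (1 = intrusion, 0 = normal traffic).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy :", accuracy_score(y_true, y_pred))    # (TP + TN) / total
print("Precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("F1 Score :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
```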
3-5- Limitations and Challenges
Our research faces multiple limitations and challenges that affect the relevance and efficiency of the model. The first is the inherent intricacy of IoT network traffic: the vast diversity of traffic types, protocols, and volumes makes it harder to build a comprehensive and effective model, and it requires more complex data processing and feature extraction, which increases the demand for computational resources. Class distribution is another major challenge that stems from real-life conditions: normal traffic typically far outweighs abnormal traffic, which can severely skew performance metrics and bias the model towards predicting the majority class, undermining the system's reliability through a poor true-positive rate (recall). Moreover, cyber threats are dynamic, so the model must be retrained regularly to identify new types of attacks; because stakeholders cannot quickly collect proper, up-to-date datasets, these limited resources can hinder the model's adaptability. Lastly, privacy concerns limit the amount of sensitive data, including signatures, that can be used for training.
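One common way to soften the class-imbalance problem described above is to weight the minority (attack) class more heavily during training. The snippet below uses scikit-learn's class-weight utility; it is a generic mitigation sketch, not part of the pipeline reported in this study, and the label counts are illustrative.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Illustrative labels: mostly normal traffic (0) with a small share of attacks (1).
y_train = np.array([0] * 950 + [1] * 50)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(zip(np.unique(y_train), weights))
print(class_weight)  # e.g. {0: ~0.53, 1: ~10.0} -> attack samples count far more in the loss

# In Keras, the dictionary would be passed to fit(), assuming `model` and `X_train` exist:
# model.fit(X_train, y_train, epochs=50, batch_size=512, class_weight=class_weight)
```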
4- Results and Analysis
We developed a deep learning model for IoT network intrusion detection, and a series of systematic evaluations shows that it identifies both new and known cyber threats with high accuracy. The key findings and their significance are summarized below:
Model performance: The model accurately detected known, common types of cyber intrusions, as reflected in clear quantitative metrics. The stacked CNN and GRU layers, together with the Attention Mechanisms, significantly improved the model's sensitivity and specificity.
Feature importance: The combined temporal and spatial feature extraction proved essential; without it, the model could hardly identify cyber intrusion patterns. The feature-importance analysis also showed how the Attention Mechanisms helped the model detect subtle anomalies that are critical indicators of threats.
Scalability and efficiency: The model remained scalable and efficient when tested on IoT network simulations at scale, regardless of the simulation size and network complexity.
Adaptability: Finally, the model was able to identify and respond to emerging threat patterns, which is essential in a highly dynamic cybersecurity environment.
Taken together, these results confirm the effectiveness of our methodology and the potential of our model as a critical tool within the cybersecurity infrastructure of IoT networks. The component-wise analysis supports our hypothesis that integrating advanced neural network architectures with optimization techniques enables effective detection of complex threats.
To tune the hyperparameters of our deep learning model for IoT network intrusion detection, we used a hybrid optimization method that combines the Harmony Search Algorithm with Bayesian Optimization, with the goal of delivering a configuration that is both maximally effective and efficient. Three optimization scenarios were evaluated:
– Harmony Search Only, which runs an extensive search of the hyperparameter space.
– Bayesian Optimization Only, which refines the most promising pre-defined configurations.
– Hybrid Approach, which combines Harmony Search's exhaustive exploration with Bayesian Optimization's focused targeting.
Hyperparameter search space: To ensure the search is as exhaustive as possible and the optimization process is effective, the search space for each hyperparameter was configured as follows:
Table 4: Search Space for Each Hyperparameter.
Hyperparameter | Search Space | Description |
Units in GRU Layers | [50, 100, 200] | Varies the complexity, allowing the model to capture more or less information. |
Dropout Rate | [0.1, 0.15, 0.2, 0.25] | Prevents overfitting by randomly omitting units during training. |
Learning Rate | [0.0001, 0.001, 0.01] | Adjusts the step size at each iteration, affecting the convergence speed. |
Number of Training Epochs | [50, 100, 200] | Influences the depth of learning by determining how many times the model sees the entire dataset. |
Batch Size | [256, 512, 1024] | Impacts the update frequency of the model’s internal parameters. |
The following table lists the hyperparameter settings evaluated under each of the three optimization strategies. For each strategy, the best-performing configuration is marked as optimal, highlighting the best combination discovered by the optimization process.
Table 5: Simulation Results.
Optimization Scenario | Configuration ID | Units | Dropout Rate | Learning Rate | Epochs | Batch Size | Accuracy | Precision | Recall | F1 Score |
---|---|---|---|---|---|---|---|---|---|---|
Harmony Search Only | H1 | 50 | 0.25 | 0.01 | 50 | 256 | 95.80% | 94.90% | 95.00% | 94.95% |
 | H2 | 100 | 0.20 | 0.001 | 100 | 512 | 97.50% | 96.20% | 96.00% | 96.10% |
 | H3 | 150 | 0.15 | 0.001 | 150 | 512 | 98.00% | 97.40% | 97.50% | 97.45% |
 | H4 | 200 | 0.10 | 0.0001 | 200 | 1024 | 98.60% | 98.20% | 98.30% | 98.25% |
 | H5 | 100 | 0.15 | 0.005 | 100 | 256 | 97.10% | 96.50% | 96.40% | 96.45% |
 | H6 (Optimal) | 200 | 0.10 | 0.0001 | 200 | 1024 | 98.90% | 98.60% | 98.70% | 98.65% |
Bayesian Optimization Only | B1 | 200 | 0.15 | 0.001 | 150 | 256 | 98.10% | 97.50% | 97.30% | 97.40% |
 | B2 | 100 | 0.10 | 0.0001 | 200 | 1024 | 98.50% | 98.00% | 97.90% | 97.95% |
 | B3 | 150 | 0.15 | 0.001 | 100 | 512 | 97.80% | 97.10% | 97.20% | 97.15% |
 | B4 | 200 | 0.10 | 0.0001 | 200 | 1024 | 99.00% | 98.70% | 98.80% | 98.75% |
 | B5 | 50 | 0.20 | 0.005 | 150 | 256 | 96.70% | 96.00% | 96.10% | 96.05% |
 | B6 (Optimal) | 200 | 0.10 | 0.0001 | 200 | 1024 | 99.10% | 98.80% | 98.90% | 98.85% |
Hybrid Approach | C1 | 200 | 0.10 | 0.0001 | 200 | 1024 | 99.72% | 99.68% | 99.70% | 99.69% |
 | C2 | 150 | 0.15 | 0.001 | 100 | 512 | 99.50% | 99.40% | 99.45% | 99.42% |
 | C3 | 100 | 0.20 | 0.01 | 50 | 256 | 99.30% | 99.10% | 99.20% | 99.15% |
 | C4 | 200 | 0.10 | 0.0001 | 200 | 1024 | 99.72% | 99.68% | 99.70% | 99.69% |
 | C5 | 150 | 0.15 | 0.005 | 150 | 512 | 99.60% | 99.50% | 99.55% | 99.52% |
 | C6 (Optimal) | 200 | 0.10 | 0.0001 | 200 | 1024 | 99.74% | 99.70% | 99.72% | 99.71% |
Fig. 2. Performance Metrics Across Configurations.
A closer look at the individual configurations confirms that systematic hyperparameter tuning improved model performance under all three optimization strategies:
Under Harmony Search Only, the best result was obtained with configuration H6, where the maximum number of units combined with the lowest available dropout rate yielded accuracy and F1 scores approaching 99%.
Under Bayesian Optimization Only, the optimal configuration B6 paired the largest batch size and epoch count tested with a low dropout rate and learning rate, illustrating how quickly Bayesian methods can converge on a strong setting.
The Hybrid Approach performed best with configuration C6, combining the strengths of both methods and recording the highest values across all metrics, which confirms the added benefit of chaining the two strategies for a complex model.
The best-performing configuration for each optimization scenario is summarized below:
Table 6: Performance Metrics and Optimal Hyperparameters.
Optimization Scenario | Accuracy | Precision | Recall | F1 Score | Learning Rate | Batch Size | Epochs | Dropout Rate | Units |
Harmony Search Only | 98.90% | 98.60% | 98.70% | 98.65% | 0.0001 | 1024 | 200 | 0.10 | 200 |
Bayesian Optimization Only | 99.10% | 98.80% | 98.90% | 98.85% | 0.0001 | 1024 | 200 | 0.10 | 200 |
Hybrid Approach | 99.74% | 99.70% | 99.72% | 99.71% | 0.0001 | 1024 | 200 | 0.10 | 200 |
5- Discussion and Interpretation
In this paper, we have thoroughly examined the effectiveness of several conventional and less conventional strategies for optimizing the hyperparameters of deep learning models used in IoT network intrusion detection. The table below summarizes our results alongside key findings from other recent works:
Table 7: Comprehensive Comparison of IDS Performance Metrics.
Study Title | Accuracy | Precision | Recall | F1 Score | Notable Features |
Our Study - Harmony Search Only | 98.90% | 98.60% | 98.70% | 98.65% | Advanced exploration and exploitation, high units, low dropout |
Our Study - Bayesian Optimization Only | 99.10% | 98.80% | 98.90% | 98.85% | Refined promising configurations, minimal dropout |
Our Study - Hybrid Approach | 99.74% | 99.70% | 99.72% | 99.71% | Combines Harmony Search and Bayesian Optimization, optimal performance |
Toward a Lightweight Intrusion Detection System for IoT [17] | 92% | 89% | 91% | 90% | Lightweight; uses SVM, focuses on packet rate, simulated IoT environment
A Feature Selection Algorithm Based on Pigeon Inspired Optimizer [18] | 91.3% | N/A | 89.7% | 90.4% | Improved feature selection using Pigeon Inspired Optimizer
A Novel Intrusion Detection Method Based on Lightweight Neural Network [19] | 98.94% | N/A | N/A | 98.93% | Lightweight neural network; minimal computational demand
A Deep Learning Technique for IDS Using RNN-Based Framework [20] | 94.11% | N/A | 85.42% | 90.00% | Utilizes RNNs including LSTM and GRU; employs XGBoost for feature selection
It is evident from the table that our hybrid approach is superior to the others and holds the highest ratings across all metrics. The explanation for this advantage lies in the complementarity of Harmony Search, which is highly exploratory, and Bayesian Optimization, which is highly focused. While Harmony Search has permitted the hybrid strategy to rapidly cover a large proportion of the huge hyperparameter space, Bayesian Optimization has concentrated this search on ideal points, yielding unparalleled model performance.
6- Conclusions
Our study demonstrates that a hybrid optimization approach combining Harmony Search and Bayesian Optimization can efficiently enhance the performance of deep learning-based IDSs in the IoT domain. The hybrid model achieved substantially higher accuracy, precision, recall, and F1 score than either Harmony Search or Bayesian Optimization used alone, and it also outperformed comparable models reported in the literature. The main implications of the study are as follows:
Performance: The HS-BO hybrid model achieved 99.74% accuracy, 99.70% precision, 99.72% recall, and a 99.71% F1 score. These figures can serve as a strong benchmark for the cybersecurity industry.
Hyperparameter tuning: The hybrid framework explores the hyperparameter search space effectively, a task that is difficult for standard search algorithms because the space is multi-dimensional and cannot be examined exhaustively. The findings are therefore useful for future efforts to formulate state-of-the-art, hardened IDS models for increasingly connected digital spaces, and more broadly for cybersecurity as the complexity of security challenges continues to rise globally.
Several research directions can be envisioned beyond the current study:
Algorithmic improvements: Future work could evaluate other optimization algorithms, such as Genetic Algorithms, Particle Swarm Optimization, or newer metaheuristics, either to compare competing algorithms against those used here or to further refine the hybrid nature of our methodology.
Real-world testing: Since this study was performed on synthetic data, we plan to deploy the methods on existing IoT networks and evaluate how closely the reported performance and reliability match real-world values.
Broader application: The proposed optimization approach can also be generalized to other areas of artificial intelligence beyond intrusion detection, including natural language processing and computer vision.
Evolving threats: Because cyber threats keep changing, this research cannot be the final word on whether intrusion detection systems can meet these challenges. Further work is needed to keep IDSs safe and dependable by continuously adapting, learning, and updating as new cyber threats appear in the ever-evolving landscape of cyberspace.
Energy efficiency: Lastly, with energy efficiency being a critical constraint for IoT devices, future work could explicitly optimize the trade-off between detection performance and energy consumption, for example by building smaller yet more effective detection models or by applying hardware-level optimization techniques.
In conclusion, our research demonstrates the value and viability of hybrid optimization techniques for enhancing the performance of intrusion detection systems in large, highly complex IoT networks. By pushing hyperparameter optimization further, we open the door to next-generation systems that combine the security, efficiency, and intelligence our society needs to confront multidimensional threats, and we recommend that future work build on these findings.
References
[1] M. S. Sani and A. K. Bardsiri, “Providing a New Smart Camera Architecture for Intrusion Detection in Wireless Visual Sensor Network,” Journal of Information Systems and Telecommunication, vol. 11, no. 1, 2023, doi: 10.52547/jist.15672.11.41.31.
[2] H. Nandanwar and R. Katarya, “Deep learning enabled intrusion detection system for Industrial IOT environment,” Expert Syst Appl, vol. 249, p. 123808, Sep. 2024, doi: 10.1016/j.eswa.2024.123808.
[3] M. F. Elrawy, A. I. Awad, and H. F. A. Hamed, “Intrusion detection systems for IoT-based smart environments: a survey,” Journal of Cloud Computing, vol. 7, no. 1, p. 21, Dec. 2018, doi: 10.1186/s13677-018-0123-6.
[4] A. Kaushik and H. Al-Raweshidy, “A novel intrusion detection system for internet of things devices and data,” Wireless Networks, vol. 30, no. 1, pp. 285–294, Jan. 2024, doi: 10.1007/s11276-023-03435-0.
[5] R. Rathna, L. M. Gladence, J. S. Cynthia, and V. M. Anu, “Energy Efficient Cross Layer MAC Protocol for Wireless Sensor Networks in Remote Area Monitoring Applications,” Journal of Information Systems and Telecommunication, vol. 9, no. 35, 2021, doi: 10.52547/jist.9.35.207.
[6] S. Alosaimi and S. M. Almutairi, “An Intrusion Detection System Using BoT-IoT,” Applied Sciences, vol. 13, no. 9, p. 5427, Apr. 2023, doi: 10.3390/app13095427.
[7] V. Choudhary, S. Tanwar, and T. Choudhury, “Evaluation of contemporary intrusion detection systems for internet of things environment,” Multimed Tools Appl, vol. 83, no. 3, pp. 7541–7581, Jan. 2024, doi: 10.1007/s11042-023-15918-5.
[8] A. Awajan, “A Novel Deep Learning-Based Intrusion Detection System for IoT Networks,” Computers, vol. 12, no. 2, p. 34, Feb. 2023, doi: 10.3390/computers12020034.
[9] R. Chaganti, W. Suliman, V. Ravi, and A. Dua, “Deep Learning Approach for SDN-Enabled Intrusion Detection System in IoT Networks,” Information, vol. 14, no. 1, p. 41, Jan. 2023, doi: 10.3390/info14010041.
[10] E. Anthi, L. Williams, M. Slowinska, G. Theodorakopoulos, and P. Burnap, “A Supervised Intrusion Detection System for Smart Home IoT Devices,” IEEE Internet Things J, vol. 6, no. 5, pp. 9042–9053, Oct. 2019, doi: 10.1109/JIOT.2019.2926365.
[11] T. Saba, A. Rehman, T. Sadad, H. Kolivand, and S. A. Bahaj, “Anomaly-based intrusion detection system for IoT networks through deep learning model,” Computers and Electrical Engineering, vol. 99, p. 107810, Apr. 2022, doi: 10.1016/j.compeleceng.2022.107810.
[12] M. Almiani, A. AbuGhazleh, A. Al-Rahayfeh, S. Atiewi, and A. Razaque, “Deep recurrent neural network for IoT intrusion detection system,” Simul Model Pract Theory, vol. 101, p. 102031, May 2020, doi: 10.1016/j.simpat.2019.102031.
[13] M. Nazarpour, N. Nezafati, and S. Shokouhyar, “Detection of Attacks and Anomalies in the Internet of Things System using Neural Networks Based on Training with PSO Algorithms, Fuzzy PSO, Comparative PSO and Mutative PSO,” Journal of Information Systems and Telecommunication, vol. 10, no. 40, 2022, doi: 10.52547/jist.16307.10.40.270.
[14] A. Heidari and M. A. Jabraeil Jamali, “Internet of Things intrusion detection systems: a comprehensive review and future directions,” Cluster Comput, vol. 26, no. 6, pp. 3753–3780, Dec. 2023, doi: 10.1007/s10586-022-03776-z.
[15] D. Musleh, M. Alotaibi, F. Alhaidari, A. Rahman, and R. M. Mohammad, “Intrusion Detection System Using Feature Extraction with Machine Learning Algorithms in IoT,” Journal of Sensor and Actuator Networks, vol. 12, no. 2, p. 29, Mar. 2023, doi: 10.3390/jsan12020029.
[16] A. Kumar, K. Abhishek, M. R. Ghalib, A. Shankar, and X. Cheng, “Intrusion detection and prevention system for an IoT environment,” Digital Communications and Networks, vol. 8, no. 4, pp. 540–551, Aug. 2022, doi: 10.1016/j.dcan.2022.05.027.
[17] S. U. Jan, S. Ahmed, V. Shakhov, and I. Koo, “Toward a Lightweight Intrusion Detection System for the Internet of Things,” IEEE Access, vol. 7, pp. 42450–42471, 2019, doi: 10.1109/ACCESS.2019.2907965.
[18] H. Alazzam, A. Sharieh, and K. E. Sabri, “A feature selection algorithm for intrusion detection system based on Pigeon Inspired Optimizer,” Expert Syst Appl, vol. 148, p. 113249, Jun. 2020, doi: 10.1016/j.eswa.2020.113249.
[19] R. Zhao et al., “A Novel Intrusion Detection Method Based on Lightweight Neural Network for Internet of Things,” IEEE Internet Things J, vol. 9, no. 12, pp. 9960–9972, 2022, doi: 10.1109/JIOT.2021.3119055.
[20] S. M. Kasongo, “A deep learning technique for intrusion detection system using a Recurrent Neural Networks based framework,” Comput Commun, vol. 199, pp. 113–125, Feb. 2023, doi: 10.1016/j.comcom.2022.12.010.
[21] V. Hnamte, H. Nhung-Nguyen, J. Hussain, and Y. Hwa-Kim, “A Novel Two-Stage Deep Learning Model for Network Intrusion Detection: LSTM-AE,” IEEE Access, vol. 11, pp. 37131–37148, 2023, doi: 10.1109/ACCESS.2023.3266979.
* Mahmood Alborzi
Mahmood_alborzi@yahoo.com