dc.description.abstract |
A Network Intrusion Detection System (NIDS) is an essential tool for network administrators to detect security breaches. Existing anomaly-based intrusion detection methods rely on traditional machine learning models such as Support Vector Machine and Random Forest. However, current machine learning-based NIDS applications often perform poorly due to the diversity of attacks and imbalanced datasets that contain few samples of attack events. It is therefore important to synthesize data in a probabilistic manner so that it resembles the original attack-related data. Accordingly, in this paper, we propose a new paradigm for this synthesizing task based on the Variational Laplace AutoEncoder (VLAE) and a Deep Neural Network, and we exploit this paradigm to develop a new intrusion detection method. We go beyond the existing VLAE model by incorporating class labels as an input to the VLAE. As a result, the latent representations of samples with different class labels are separated in the latent space, and we can generate attack samples conditioned on class labels. We term the enhanced model the Conditional Variational Laplace AutoEncoder (CVLAE). We further extend the proposed model by adding an attention mechanism to better reconstruct features, yielding the Conditional Variational Laplace Attention AutoEncoder (CVLAAE). We employ CVLAE and CVLAAE to learn latent variable representations of network data features and to synthesize data in a probabilistic manner. A Deep Neural Network (DNN) classifier, trained on the original and synthesized data, is then used to classify the attack samples. We evaluate our model on the benchmark NSL-KDD and KDD CUP 99 datasets and demonstrate the efficacy of the proposed method by showing that it achieves higher performance on minority attacks than the other existing methods in our experimentation. The experimental results further demonstrate that adding the attention mechanism to CVLAE, resulting in CVLAAE, yields the best overall performance in terms of precision, recall, specificity, and F1 score. |
en_US |