Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown. The dataset generation failed because of a cast error.

Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 70 new columns ({'Fwd_IAT_Min', 'Flow_IAT_Std', 'Idle_Std', 'Flow_Byts/s', 'Pkt_Len_Min', 'Pkt_Len_Max', 'Fwd_IAT_Std', 'Bwd_Pkt_Len_Max', 'URG_Flag_Cnt', 'Bwd_Pkts/b_Avg', 'Init_Fwd_Win_Byts', 'Fwd_URG_Flags', 'Fwd_Seg_Size_Min', 'Bwd_IAT_Tot', 'Protocol', 'Flow_IAT_Min', 'Idle_Min', 'Bwd_IAT_Min', 'Fwd_IAT_Tot', 'Dst_Port', 'Fwd_Pkts/b_Avg', 'Active_Std', 'Pkt_Len_Mean', 'Bwd_Pkt_Len_Std', 'Pkt_Len_Std', 'Bwd_Byts/b_Avg', 'Bwd_IAT_Std', 'Dst_IP', 'Flow_Duration', 'Fwd_Pkts/s', 'Bwd_Pkt_Len_Mean', 'Fwd_Header_Len', 'Idle_Mean', 'TotLen_Fwd_Pkts', 'Bwd_URG_Flags', 'RST_Flag_Cnt', 'Src_Port', 'ACK_Flag_Cnt', 'SYN_Flag_Cnt', 'Fwd_Act_Data_Pkts', 'Down/Up_Ratio', 'Fwd_PSH_Flags', 'Fwd_Pkt_Len_Std', 'Flow_IAT_Max', 'Flow_IAT_Mean', 'Bwd_Pkts/s', 'Bwd_IAT_Mean', 'Fwd_Pkt_Len_Min', 'Fwd_Byts/b_Avg', 'Bwd_Blk_Rate_Avg', 'Bwd_Header_Len', 'Tot_Fwd_Pkts', 'Bwd_PSH_Flags', 'Bwd_Pkt_Len_Min', 'Fwd_Pkt_Len_Mean', 'Src_IP', 'Fwd_Pkt_Len_Max', 'ECE_Flag_Cnt', 'FIN_Flag_Cnt', 'Tot_Bwd_Pkts', 'Active_Min', 'Init_Bwd_Win_Byts', 'Active_Mean', 'Flow_Pkts/s', 'Flow_ID', 'Fwd_Blk_Rate_Avg', 'Fwd_IAT_Mean', 'TotLen_Bwd_Pkts', 'CWE_Flag_Count', 'Pkt_Len_Var'}) and 3 missing columns ({'DLC', 'Data', 'Arbitration_ID'}).

This happened while the csv dataset builder was generating data using

hf://datasets/Thi-Thu-Huong/resampled_IDS_datasets/resampled_train_IoTID20.csv (at revision 0320877d48042b3ec0b83e09f05e4d12db5ad6a5)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1871, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 623, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2293, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2241, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              Tot_Fwd_Pkts: int64
              Fwd_Pkts/b_Avg: int64
              Pkt_Len_Mean: double
              Flow_IAT_Min: double
              Fwd_PSH_Flags: int64
              Active_Std: double
              Protocol: int64
              Fwd_Pkts/s: double
              Bwd_Byts/b_Avg: int64
              Fwd_Pkt_Len_Min: double
              Fwd_IAT_Std: double
              Idle_Mean: double
              Flow_IAT_Mean: double
              Pkt_Len_Max: double
              Src_IP: double
              Dst_Port: int64
              Bwd_IAT_Min: double
              Flow_ID: double
              TotLen_Fwd_Pkts: double
              Init_Fwd_Win_Byts: int64
              Bwd_Pkt_Len_Min: double
              Fwd_Pkt_Len_Std: double
              Bwd_Pkt_Len_Max: double
              Fwd_Byts/b_Avg: int64
              Active_Mean: double
              Bwd_Pkt_Len_Std: double
              Flow_Duration: int64
              TotLen_Bwd_Pkts: double
              Active_Min: double
              Tot_Bwd_Pkts: int64
              Flow_Byts/s: double
              Fwd_Pkt_Len_Mean: double
              Bwd_Pkts/s: double
              Pkt_Len_Var: double
              Fwd_Pkt_Len_Max: double
              Bwd_URG_Flags: int64
              Fwd_Header_Len: int64
              Bwd_IAT_Std: double
              Dst_IP: double
              Flow_IAT_Std: double
              ECE_Flag_Cnt: int64
              Pkt_Len_Std: double
              Fwd_URG_Flags: int64
              Flow_IAT_Max: double
              CWE_Flag_Count: int64
              ACK_Flag_Cnt: int64
              Flow_Pkts/s: double
              Bwd_Blk_Rate_Avg: int64
              Idle_Std: double
              Bwd_Header_Len: int64
              Init_Bwd_Win_Byts: int64
              FIN_Flag_Cnt: int64
              Bwd_PSH_Flags: int64
              Fwd_Seg_Size_Min: int64
              Bwd_IAT_Tot: double
              Bwd_Pkt_Len_Mean: double
              URG_Flag_Cnt: int64
              Fwd_Blk_Rate_Avg: int64
              Src_Port: int64
              RST_Flag_Cnt: int64
              Fwd_Act_Data_Pkts: int64
              Down/Up_Ratio: double
              Bwd_Pkts/b_Avg: int64
              Fwd_IAT_Min: double
              Fwd_IAT_Mean: double
              SYN_Flag_Cnt: int64
              Timestamp: double
              Idle_Min: double
              Fwd_IAT_Tot: double
              Pkt_Len_Min: double
              Bwd_IAT_Mean: double
              Classes: int64
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 9165
              to
              {'Timestamp': Value(dtype='float64', id=None), 'Arbitration_ID': Value(dtype='float64', id=None), 'DLC': Value(dtype='int64', id=None), 'Data': Value(dtype='float64', id=None), 'Classes': Value(dtype='int64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1438, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 925, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1742, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1873, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              

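The error above suggests separating the mismatched CSV files into different viewer configurations. As a rough sketch of what the README YAML front matter could look like: only `resampled_train_IoTID20.csv` is confirmed by the log, and the second config's file pattern below is hypothetical.

```yaml
configs:
- config_name: IoTID20
  data_files: "resampled_train_IoTID20.csv"
- config_name: CHCD2020
  # hypothetical pattern; the actual CAN-bus file names may differ
  data_files: "resampled_*_CHCD2020.csv"
```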

*(dtypes: Timestamp: float64, Arbitration_ID: float64, DLC: int64, Data: float64, Classes: int64)*

| Timestamp | Arbitration_ID | DLC | Data | Classes |
| --------: | -------------: | --: | ---: | ------: |
| 564,420 | 50 | 4 | 238,335 | 2 |
| 371,275 | 12 | 8 | 9,932 | 2 |
| 507,895 | 41 | 8 | 146,403 | 2 |
| 784,621 | 8 | 8 | 156,220 | 2 |
| 367,796 | 9 | 8 | 204,292 | 1 |
| 119,756 | 0 | 8 | 0 | 0 |
| 756,528 | 23 | 8 | 43,774 | 2 |
| 301,321 | 18 | 8 | 227,055 | 2 |
| 97,732 | 9 | 8 | 57,819 | 2 |
| 253,840 | 14 | 7 | 127,830 | 2 |
| 74,425 | 34 | 8 | 47,064 | 2 |
| 368,335 | 56 | 8 | 0 | 2 |
| 134,724 | 57 | 8 | 67,603 | 2 |
| 217,778 | 50 | 4 | 169,231 | 2 |
| 272,312 | 19 | 8 | 57,315 | 2 |
| 309,282 | 23 | 8 | 44,357 | 2 |
| 250,613 | 12 | 8 | 275,407 | 2 |
| 40,489 | 8 | 8 | 218,938 | 2 |
| 328,542 | 13 | 8 | 10,191 | 2 |
| 709,494 | 27 | 8 | 277,738 | 2 |
| 37,804 | 10 | 6 | 206,774 | 2 |
| 13,871 | 17 | 8 | 194,625 | 2 |
| 575,221 | 13 | 8 | 10,169 | 2 |
| 115,269 | 31 | 5 | 42,573 | 2 |
| 146,732 | 16 | 8 | 13,614 | 2 |
| 443,417 | 71 | 8 | 0 | 2 |
| 309,522 | 4 | 8 | 150,174 | 2 |
| 298,386 | 26 | 8 | 144,793 | 2 |
| 704,472 | 19 | 8 | 174,890 | 2 |
| 588,505 | 9 | 8 | 60,129 | 2 |
| 11,834 | 34 | 8 | 48,719 | 2 |
| 664,808 | 3 | 8 | 153,263 | 2 |
| 528,830 | 34 | 8 | 47,533 | 2 |
| 767,058 | 5 | 8 | 106,350 | 2 |
| 764,218 | 10 | 6 | 128,823 | 2 |
| 139,080 | 15 | 8 | 1,962 | 2 |
| 132,697 | 26 | 8 | 238,257 | 2 |
| 625,986 | 22 | 8 | 11 | 2 |
| 470,542 | 38 | 8 | 10,955 | 2 |
| 667,701 | 3 | 8 | 213,944 | 2 |
| 437,216 | 28 | 8 | 81 | 2 |
| 297,912 | 42 | 8 | 10,912 | 2 |
| 3,745 | 5 | 8 | 106,353 | 2 |
| 797,743 | 50 | 4 | 144,613 | 2 |
| 87,090 | 10 | 6 | 84,040 | 2 |
| 67,276 | 22 | 8 | 14 | 2 |
| 754,948 | 9 | 8 | 56,798 | 2 |
| 270,237 | 6 | 4 | 18,387 | 2 |
| 472,343 | 0 | 8 | 0 | 0 |
| 684,690 | 12 | 8 | 275,407 | 2 |
| 179,189 | 10 | 6 | 89,228 | 2 |
| 332,398 | 69 | 8 | 0 | 2 |
| 806,226 | 9 | 8 | 49,527 | 2 |
| 465,103 | 13 | 8 | 10,187 | 2 |
| 21,404 | 27 | 8 | 277,684 | 2 |
| 481,467 | 0 | 8 | 0 | 0 |
| 389,988 | 14 | 7 | 142,359 | 2 |
| 670,152 | 13 | 8 | 10,218 | 2 |
| 722,826 | 9 | 8 | 51,492 | 2 |
| 291,657 | 11 | 8 | 81,724 | 2 |
| 465,324 | 10 | 6 | 208,493 | 2 |
| 586,205 | 10 | 6 | 152,819 | 2 |
| 588,139 | 31 | 5 | 42,849 | 2 |
| 553,959 | 18 | 8 | 97,426 | 2 |
| 491,585 | 0 | 8 | 0 | 0 |
| 736,866 | 3 | 8 | 115,767 | 2 |
| 115,436 | 12 | 8 | 275,410 | 2 |
| 796,084 | 27 | 8 | 277,683 | 2 |
| 420,253 | 26 | 8 | 256,571 | 2 |
| 641,661 | 14 | 7 | 128,462 | 2 |
| 408,776 | 27 | 8 | 277,739 | 2 |
| 543,362 | 13 | 8 | 10,171 | 2 |
| 37,349 | 7 | 8 | 75,852 | 2 |
| 204,119 | 63 | 8 | 11,203 | 2 |
| 78,253 | 6 | 4 | 18,660 | 2 |
| 495,833 | 0 | 8 | 0 | 0 |
| 795,186 | 4 | 8 | 120,088 | 2 |
| 758,286 | 8 | 8 | 72,121 | 2 |
| 483,813 | 0 | 8 | 0 | 0 |
| 580,338 | 20 | 8 | 9,936 | 2 |
| 120,770 | 0 | 8 | 0 | 0 |
| 193,501 | 34 | 8 | 47,344 | 2 |
| 707,219 | 15 | 8 | 5,825 | 2 |
| 444,697 | 16 | 8 | 16,925 | 2 |
| 432,845 | 9 | 8 | 63,331 | 2 |
| 4,797 | 50 | 4 | 104,376 | 2 |
| 472,678 | 11 | 8 | 205,429 | 2 |
| 572,300 | 16 | 8 | 30,756 | 2 |
| 50,095 | 3 | 8 | 257,258 | 2 |
| 155,951 | 14 | 7 | 147,194 | 2 |
| 672,976 | 32 | 8 | 89,894 | 2 |
| 532,962 | 13 | 8 | 10,199 | 2 |
| 549,797 | 19 | 8 | 210,808 | 2 |
| 68,001 | 50 | 4 | 238,151 | 2 |
| 3,253 | 23 | 8 | 43,459 | 2 |
| 776,466 | 5 | 8 | 106,352 | 2 |
| 330,263 | 5 | 8 | 106,351 | 2 |
| 101,244 | 0 | 8 | 0 | 0 |
| 409,774 | 12 | 8 | 275,399 | 2 |
| 466,091 | 6 | 4 | 18,520 | 2 |

End of preview.
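Since the card is about class imbalance, a quick way to inspect a preview like the one above is to tally how often each class id appears. A minimal standard-library sketch (the helper name `class_distribution` is ours, not part of the dataset):

```python
from collections import Counter

def class_distribution(rows, label_col=-1):
    """Count how often each class id appears; each row is a sequence
    whose last column (by default) holds the integer class id."""
    return Counter(row[label_col] for row in rows)
```

Run over the preview rows above, this makes the residual skew toward class 2 immediately visible.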

Dataset Card for resampled_IDS_datasets

Intrusion Detection Systems (IDS) play a crucial role in securing computer networks against malicious activities. However, their efficacy is consistently hindered by the persistent challenge of class imbalance in real-world datasets. While various methods (resampling techniques, ensemble methods, cost-sensitive learning, data augmentation, and others) have individually addressed class-imbalance issues, there is a notable gap in the literature for effective hybrid methodologies aimed at enhancing IDS performance. To bridge this gap, our research introduces a methodology that integrates hybrid undersampling and oversampling strategies within an ensemble classification framework. This approach is designed to harmonize dataset distributions and optimize IDS performance, particularly in intricate multi-class scenarios. In-depth evaluations were conducted on well-established intrusion detection datasets, including the Car Hacking: Attack and Defense Challenge 2020 (CHADC2020) and IoTID20. Our results show significant improvements in precision, recall, and F1-score metrics; notably, the hybrid-ensemble method achieved an average F1 score exceeding 98% on both datasets, underscoring its capability to substantially enhance intrusion detection accuracy. In summary, this research provides a robust solution to the pervasive challenge of class imbalance in IDS: the hybrid framework not only strengthens IDS efficacy but also demonstrates the seamless integration of undersampling and oversampling within ensemble classifiers, paving the way for fortified network defenses.

Dataset Description

We provide resampled datasets generated with the BorderlineSMOTE method from parts of two public datasets: the Car_Hacking_Challenge_Dataset and the IoT Network Intrusion Dataset (IoTID20).
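To illustrate the idea behind BorderlineSMOTE, here is a toy pure-Python sketch: minority samples whose neighbourhood is dominated (but not saturated) by other classes are treated as "borderline", and new minority samples are synthesized by interpolating between them and nearby minority points. This is only an illustration of the technique, not the implementation used to build these files; in practice one would use `imblearn.over_sampling.BorderlineSMOTE`.

```python
import random

def borderline_smote(X, y, minority, k=3, n_new=10, seed=0):
    """Toy Borderline-SMOTE-style oversampling.

    X: list of equal-length numeric tuples; y: list of class labels.
    Needs at least two minority samples. Returns enlarged (X, y)."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    min_pts = [x for x, lab in zip(X, y) if lab == minority]

    # A minority point is "borderline" (in danger) when more than half,
    # but not all, of its k nearest neighbours belong to other classes.
    danger = []
    for p in min_pts:
        nn = sorted(zip(X, y), key=lambda t: dist(p, t[0]))[1:k + 1]
        n_other = sum(1 for _, lab in nn if lab != minority)
        if k / 2 <= n_other < k:
            danger.append(p)
    if not danger:  # nothing borderline: fall back to all minority points
        danger = min_pts

    new_X, new_y = list(X), list(y)
    for _ in range(n_new):
        p = rng.choice(danger)
        pool = [m for m in min_pts if m != p]
        q = sorted(pool, key=lambda m: dist(p, m))[:k][rng.randrange(min(k, len(pool)))]
        gap = rng.random()  # interpolate between p and a minority neighbour
        new_X.append(tuple(pi + gap * (qi - pi) for pi, qi in zip(p, q)))
        new_y.append(minority)
    return new_X, new_y
```

The real algorithm additionally distinguishes "borderline-1" and "borderline-2" variants and uses separate neighbourhood sizes for the danger test and the interpolation step.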

In the Car_Hacking_Challenge_Dataset, we labelled the output classes as follows: 'Flooding' = 0, 'Fuzzing' = 1, 'Normal' = 2, 'Replay' = 3, 'Spoofing' = 4.

In the IoTID20 dataset, we labelled the output classes as follows: 'DoS-Synflooding' = 0, 'MITM ARP Spoofing' = 1, 'Mirai ARP Spoofing' = 2, 'Mirai-Hostbruteforceg' = 3, 'Mirai HTTP Flooding' = 4, 'Mirai UDP Flooding' = 5, 'Scan Host Port' = 6, 'Scan Port OS' = 7, 'Normal' = 8.
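The two label encodings above can be turned into lookup tables for decoding the integer `Classes` column back to label names. The dict and function names below are our own (hypothetical helpers); the id-to-name mappings are taken verbatim from the card text.

```python
# Car_Hacking_Challenge_Dataset class ids (from the card text above)
CHCD_LABELS = {0: "Flooding", 1: "Fuzzing", 2: "Normal", 3: "Replay", 4: "Spoofing"}

# IoTID20 class ids (from the card text above)
IOTID20_LABELS = {
    0: "DoS-Synflooding", 1: "MITM ARP Spoofing", 2: "Mirai ARP Spoofing",
    3: "Mirai-Hostbruteforceg", 4: "Mirai HTTP Flooding", 5: "Mirai UDP Flooding",
    6: "Scan Host Port", 7: "Scan Port OS", 8: "Normal",
}

def decode(classes, table):
    """Map a sequence of integer class ids back to their label names."""
    return [table[c] for c in classes]
```

For example, `decode([2, 0, 1], CHCD_LABELS)` yields `["Normal", "Flooding", "Fuzzing"]`.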

Citation

Le, T. T. H., Shin, Y., Kim, M., & Kim, H. (2024). Towards unbalanced multiclass intrusion detection with hybrid sampling methods and ensemble classification. Applied Soft Computing, 157, 111517.

BibTeX:

@article{le2024towards,
  title     = {Towards unbalanced multiclass intrusion detection with hybrid sampling methods and ensemble classification},
  author    = {Le, Thi Thu Huong and Shin, Yeongjae and Kim, Myeongkil and Kim, Howon and others},
  journal   = {Applied Soft Computing},
  volume    = {157},
  pages     = {111517},
  year      = {2024},
  publisher = {Elsevier}
}

@misc{le_2025,
  author    = {{Le}},
  title     = {resampled_IDS_datasets (Revision 45a8285)},
  year      = 2025,
  url       = {https://huggingface.co/datasets/Thi-Thu-Huong/resampled_IDS_datasets},
  doi       = {10.57967/hf/4961},
  publisher = {Hugging Face}
}

Dataset Card Contact

Email: [email protected]

Downloads last month: 53