Tasks: Token Classification
Modalities: Text
Formats: parquet
Languages: English
Size: 10K - 100K
The dataset is divided into three splits:

- Train: 7,751 examples
- Validation: 1,661 examples
- Test: 1,662 examples

Each example in the dataset consists of a sequence of tokens and their corresponding NER tags. The dataset has been carefully preprocessed to ensure high quality and consistency. The dataset mostly contains threat and intelligence report descriptions from the Alien Vault
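The token/NER-tag pairing described above can be illustrated with a minimal sketch. The example sentence, the label list, and the tag indices below are assumptions for illustration only; the actual label inventory is not specified in this excerpt of the card.

```python
# Illustrative sketch of the dataset's example structure (tokens + NER tags).
# The tokens, label list, and tag indices below are assumed, not taken
# from the actual dataset.

split_sizes = {"train": 7_751, "validation": 1_661, "test": 1_662}

# A single example pairs each token with an integer NER-tag index.
example = {
    "tokens": ["APT28", "targeted", "the", "energy", "sector", "."],
    "ner_tags": [1, 0, 0, 3, 4, 0],
}

# Assumed IOB2-style label list for threat-intelligence entities.
labels = ["O", "B-THREAT_ACTOR", "I-THREAT_ACTOR", "B-SECTOR", "I-SECTOR"]

def decode_tags(ex, label_list):
    """Map each token to its string NER tag."""
    return list(zip(ex["tokens"], (label_list[i] for i in ex["ner_tags"])))

total = sum(split_sizes.values())
print(total)                            # 11074 examples across all splits
print(decode_tags(example, labels))
```

The split sizes sum to 11,074 examples, which is consistent with the card's stated size bucket of 10K - 100K.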