---
license: cc-by-4.0
tags:
- multi-label-classification
- text-classification
- onnx
- web-classification
- firefox-ai
- preview
language:
- multilingual
datasets:
- tshasan/multi-label-web-classification
base_model: Alibaba-NLP/gte-modernbert-base
pipeline_tag: text-classification
---

# URL-TITLE-classifier-preview

## Model Overview

This is a **preview version** of a multi-label web classification model fine-tuned from [`Alibaba-NLP/gte-modernbert-base`](https://huggingface.co/Alibaba-NLP/gte-modernbert-base). It classifies websites into multiple categories based on their URLs and titles.

The model supports **11 labels**:
`Uncategorized`, `News`, `Entertainment`, `Shop`, `Chat`, `Education`, `Government`, `Health`, `Technology`, `Work`, and `Travel`.

- **Developed by**: Taimur Hasan
- **Model Type**: Multi-label Text Classification
- **Status**: Preview (under active development)

### Architecture

- **Fine-tuning Strategy**: Unfroze the last 4 encoder layers and the pooler
- **Problem Type**: Multi-label classification
- **Output Labels**:
  - `News`, `Entertainment`, `Shop`, `Chat`, `Education`, `Government`, `Health`, `Technology`, `Work`, `Travel`, `Uncategorized`
- **Input Format**: Concatenated string `"{url}:{title}"`
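
The unfreezing scheme above can be expressed in a few lines with 🤗 Transformers. The sketch below is a hypothetical reconstruction, not the author's training code: the parameter-name markers, the 22-layer ModernBERT-base layout, and the `problem_type` setting are assumptions, so inspect `model.named_parameters()` before relying on them.

```python
# Hypothetical reconstruction of the stated fine-tuning strategy (not the author's
# training code): freeze everything, then unfreeze the last four encoder layers and
# the pooler/classification head.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "Alibaba-NLP/gte-modernbert-base",
    num_labels=11,
    problem_type="multi_label_classification",  # BCE-with-logits loss, one sigmoid per label
)

# Freeze all parameters first.
for param in model.parameters():
    param.requires_grad = False

# Assumption: ModernBERT-base has 22 encoder layers (indices 0-21) and the pooling /
# classification parameters contain "head", "classifier", or "pooler" in their names.
trainable_markers = [f"layers.{i}." for i in range(18, 22)] + ["head", "classifier", "pooler"]
for name, param in model.named_parameters():
    if any(marker in name for marker in trainable_markers):
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```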
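
For inference, the model should behave like any multi-label `text-classification` checkpoint: concatenate the URL and title as described above, apply a sigmoid to the logits, and keep every label above a decision threshold. In the sketch below, the repository id and the 0.5 threshold are illustrative assumptions rather than values stated in this card; the authoritative label order lives in the hosted `config.json` (`id2label`).

```python
# Minimal inference sketch. The repo id and the 0.5 threshold are assumptions,
# not values taken from this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "tshasan/URL-TITLE-classifier-preview"  # hypothetical repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

url = "https://example.com/store/deals"
title = "Summer Sale - Up to 50% Off"
text = f"{url}:{title}"  # input format described above

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: independent sigmoid per label, keep everything above the threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p.item() > 0.5]
print(predicted)  # e.g. ["Shop"]
```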

---

## Evaluation Metrics (Validation Data)

| Metric                   | Value |
|--------------------------|-------|
| **Loss**                 | 0.207 |
| **Hamming Loss**         | 0.083 |
| **Exact Match**          | 0.445 |
| **Precision (Micro)**    | 0.917 |
| **Recall (Micro)**       | 0.917 |
| **F1 Score (Micro)**     | 0.917 |
| **Precision (Macro)**    | 0.795 |
| **Recall (Macro)**       | 0.598 |
| **F1 Score (Macro)**     | 0.677 |
| **Precision (Weighted)** | 0.798 |
| **Recall (Weighted)**    | 0.647 |
| **F1 Score (Weighted)**  | 0.711 |
| **ROC AUC (Micro)**      | 0.941 |
| **ROC AUC (Macro)**      | 0.928 |
| **PR AUC (Micro)**       | 0.815 |
| **PR AUC (Macro)**       | 0.765 |
| **Jaccard (Micro)**      | 0.848 |
| **Jaccard (Macro)**      | 0.520 |

### Per-Label F1 Scores

| Label         | F1 Score |
|---------------|----------|
| News          | 0.605 |
| Entertainment | 0.764 |
| Shop          | 0.704 |
| Chat          | 0.875 |
| Education     | 0.763 |
| Government    | 0.667 |
| Health        | 0.574 |
| Technology    | 0.738 |
| Work          | 0.527 |
| Travel        | 0.571 |
| Uncategorized | 0.657 |
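
The aggregate scores above follow the standard multi-label definitions: micro-averaging pools every label decision, macro-averaging takes an unweighted mean over labels, and exact match requires every label to be correct for a sample. As a generic illustration (not the evaluation code behind this card), metrics of these types can be reproduced with scikit-learn from binary prediction and target matrices:

```python
# Generic illustration of the reported metric types (not this card's evaluation code).
# y_true / y_pred are made-up 0/1 matrices of shape (n_samples, n_labels).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, hamming_loss, jaccard_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1]])
y_pred = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]])

print("Hamming loss   :", hamming_loss(y_true, y_pred))    # fraction of wrong label decisions
print("Exact match    :", accuracy_score(y_true, y_pred))  # every label correct for the sample
print("F1 (micro)     :", f1_score(y_true, y_pred, average="micro"))
print("F1 (macro)     :", f1_score(y_true, y_pred, average="macro"))
print("F1 (weighted)  :", f1_score(y_true, y_pred, average="weighted"))
print("Jaccard (micro):", jaccard_score(y_true, y_pred, average="micro"))
# ROC AUC / PR AUC are computed from the raw sigmoid scores rather than thresholded
# predictions, e.g. via sklearn.metrics.roc_auc_score and average_precision_score.
```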

---

> **Note:** This model is in preview and may not generalize well outside of its training dataset. Feedback and contributions are welcome.