update table
README.md CHANGED
@@ -294,7 +294,7 @@ configs:
 # LlamaLens: Specialized Multilingual LLM Dataset
 
 ## Overview
-LlamaLens is a specialized multilingual LLM designed for analyzing news and social media content. It focuses on
+LlamaLens is a specialized multilingual LLM designed for analyzing news and social media content. It focuses on 18 NLP tasks, leveraging 52 datasets across Arabic, English, and Hindi.
 
 
 <p align="center"> <img src="./capablities_tasks_datasets.png" style="width: 40%;" id="title-icon"> </p>
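Since the card's front matter declares `configs:` (visible in the first hunk's context), each task/dataset pair should be loadable as its own subset. Below is a minimal sketch assuming a hypothetical Hub id `QCRI/LlamaLens` and config name `emotion`; neither name is confirmed by this diff, so take the real ones from the card.

```python
# Minimal sketch: load one LlamaLens subset with the Hugging Face `datasets`
# library. The repo id and config name are illustrative assumptions; the real
# names come from the card's `configs:` block.
from datasets import load_dataset

ds = load_dataset("QCRI/LlamaLens", "emotion", split="test")  # hypothetical names
print(ds[0])  # one record, e.g. instruction, input text, and label
```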
@@ -304,7 +304,7 @@ This repo includes scripts needed to run our full pipeline, including data prepr
 
 ### Features
 - Multilingual support (Arabic, English, Hindi)
--
+- 18 NLP tasks with 52 datasets
 - Optimized for news and social media content analysis
 
 ## 📂 Dataset Overview
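To see how the "18 NLP tasks with 52 datasets" bullet maps onto concrete subsets, the config names can be enumerated without downloading any data. The same hypothetical repo id as above is assumed:

```python
# Sketch: list the per-task/language configs declared by the dataset card.
from datasets import get_dataset_config_names

configs = get_dataset_config_names("QCRI/LlamaLens")  # hypothetical repo id
print(len(configs))  # the card claims 52 datasets across Arabic, English, Hindi
print(configs[:5])
```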
@@ -334,28 +334,32 @@ This repo includes scripts needed to run our full pipeline, including data prepr
 
 ## Results
 
-Below, we present
-
+Below, we present the performance of **L-Lens: LlamaLens**, where *"Eng"* refers to the English-instructed model and *"Native"* refers to the model trained with native-language instructions. The results are compared against the SOTA (where available) and the **Llama-Instruct 3.1** baseline (the *Base* column). The **Δ** (Delta) column indicates the difference between LlamaLens and the SOTA performance, calculated as (LlamaLens – SOTA).
+
 
-| **Task** | **Dataset** | **Metric** | **SOTA** | **Llama-instruct** | **LLamalens** | **Δ** (LLamalens - SOTA) |
-|----------------------|---------------------------|-----------:|--------:|--------------------:|--------------:|------------------------------:|
-| News Summarization | xlsum | R-2 | 0.152 | 0.074 | 0.141 | -0.011 |
-| News Genre | CNN_News_Articles | Acc | 0.940 | 0.644 | 0.915 | -0.025 |
-| News Genre | News_Category | Ma-F1 | 0.769 | 0.970 | 0.505 | -0.264 |
-| News Genre | SemEval23T3-ST1 | Mi-F1 | 0.815 | 0.687 | 0.241 | -0.574 |
-| Subjectivity | CT24_T2 | Ma-F1 | 0.744 | 0.535 | 0.508 | -0.236 |
-| Emotion | emotion | Ma-F1 | 0.790 | 0.353 | 0.878 | 0.088 |
-| Sarcasm | News-Headlines | Acc | 0.897 | 0.668 | 0.956 | 0.059 |
-| Sentiment | NewsMTSC | Ma-F1 | 0.817 | 0.628 | 0.627 | -0.190 |
-| Checkworthiness | CT24_T1 | F1_Pos | 0.753 | 0.404 | 0.877 | 0.124 |
-| Claim | claim-detection | Mi-F1 | – | 0.545 | 0.915 | – |
-| Factuality | News_dataset | Acc | 0.920 | 0.654 | 0.946 | 0.026 |
-| Factuality | Politifact | W-F1 | 0.490 | 0.121 | 0.290 | -0.200 |
-| Propaganda | QProp | Ma-F1 | 0.667 | 0.759 | 0.851 | 0.184 |
-| Cyberbullying | Cyberbullying | Acc | 0.907 | 0.175 | 0.847 | -0.060 |
-| Offensive | Offensive_Hateful | Mi-F1 | – | 0.692 | 0.805 | – |
-| Offensive | offensive_language | Mi-F1 | 0.994 | 0.646 | 0.884 | -0.110 |
-| Offensive & Hate | hate-offensive-speech | Acc | 0.945 | 0.602 | 0.924 | -0.021 |
+| **Task** | **Dataset** | **Metric** | **SOTA** | **Base** | **L-Lens-Eng** | **L-Lens-Native** | **Δ (L-Lens-Eng - SOTA)** |
+|:----------------------------------:|:--------------------------------------------:|:----------:|:--------:|:--------:|:--------------:|:-----------------:|:-------------------------:|
+| Checkworthiness Detection | CT24_checkworthy | F1_Pos | 0.753 | 0.404 | 0.942 | 0.942 | 0.189 |
+| Claim Detection | claim-detection | Mi-F1 | -- | 0.545 | 0.864 | 0.889 | -- |
+| Cyberbullying Detection | Cyberbullying | Acc | 0.907 | 0.175 | 0.836 | 0.855 | -0.071 |
+| Emotion Detection | emotion | Ma-F1 | 0.790 | 0.353 | 0.803 | 0.808 | 0.013 |
+| Factuality | News_dataset | Acc | 0.920 | 0.654 | 1.000 | 1.000 | 0.080 |
+| Factuality | Politifact | W-F1 | 0.490 | 0.121 | 0.287 | 0.311 | -0.203 |
+| News Categorization | CNN_News_Articles_2011-2022 | Acc | 0.940 | 0.644 | 0.970 | 0.970 | 0.030 |
+| News Categorization | News_Category_Dataset | Ma-F1 | 0.769 | 0.970 | 0.824 | 0.520 | 0.055 |
+| News Genre Categorization | SemEval23T3-subtask1 | Mi-F1 | 0.815 | 0.687 | 0.241 | 0.253 | -0.574 |
+| News Summarization | xlsum | R-2 | 0.152 | 0.074 | 0.182 | 0.181 | 0.030 |
+| Offensive Language Detection | Offensive_Hateful_Dataset_New | Mi-F1 | -- | 0.692 | 0.814 | 0.813 | -- |
+| Offensive Language Detection | offensive_language_dataset | Mi-F1 | 0.994 | 0.646 | 0.899 | 0.893 | -0.095 |
+| Offensive Language and Hate Speech | hate-offensive-speech | Acc | 0.945 | 0.602 | 0.931 | 0.935 | -0.014 |
+| Propaganda Detection | QProp | Ma-F1 | 0.667 | 0.759 | 0.963 | 0.973 | 0.296 |
+| Sarcasm Detection | News-Headlines-Dataset-For-Sarcasm-Detection | Acc | 0.897 | 0.668 | 0.936 | 0.947 | 0.039 |
+| Sentiment Classification | NewsMTSC-dataset | Ma-F1 | 0.817 | 0.628 | 0.751 | 0.748 | -0.066 |
+| Subjectivity Detection | clef2024-checkthat-lab | Ma-F1 | 0.744 | 0.535 | 0.642 | 0.628 | -0.102 |
+
+
+---
 
 
 ## File Format
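As a quick sanity check on the updated table, the Δ column is plain subtraction of the SOTA column from L-Lens-Eng (Ma-F1, Mi-F1, and W-F1 denote macro-, micro-, and weighted-averaged F1, and R-2 is ROUGE-2, as the abbreviations suggest). A few rows recomputed, with values copied verbatim from the new table:

```python
# Recompute Δ = L-Lens-Eng − SOTA for a few rows of the table above.
rows = [
    ("Checkworthiness Detection", "CT24_checkworthy", 0.753, 0.942),
    ("Emotion Detection",         "emotion",          0.790, 0.803),
    ("Factuality",                "Politifact",       0.490, 0.287),
    ("Propaganda Detection",      "QProp",            0.667, 0.963),
]
for task, dataset, sota, eng in rows:
    print(f"{task:<28} {dataset:<18} Δ = {eng - sota:+.3f}")
# Output matches the table's Δ column: +0.189, +0.013, -0.203, +0.296
```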