---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: train/data.jsonl
  - split: test
    path: test/test.jsonl
task_categories:
- text-classification
language:
- he
size_categories:
- 10K<n<100K
---
# HebrewSentiment - A Sentiment-Analysis Dataset in Hebrew
## Summary
HebrewSentiment is a Hebrew-language dataset for the sentiment analysis task: each example is a sentence labeled as positive, negative, or neutral.
## Introduction
This dataset was constructed via [To Fill In].
## Dataset Statistics
The table below shows the number of examples in each category for each split:
| split | total | positive | negative | neutral |
|-------|----------|----------|----------|---------|
| train | 39,135 | 8,968 | 7,669 | 22,498 |
| test | 2,170 | 503 | 433 | 1,234 |
## Dataset Description
Each row in the dataset contains the following fields:
- **id**: A unique identifier for the example
- **text**: The textual content of the input sentence
- **tag_ids**: The label of the example (`Neutral`/`Positive`/`Negative`)
- **task_name**: [To fill in]
- **campaign_id**: [To fill in]
- **annotator_agreement_strength**: [To fill in]
- **survey_name**: [To fill in]
- **industry**: [To fill in]
- **type**: [To fill in]
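
For reference, the snippet below sketches how the dataset might be loaded and inspected with the 🤗 `datasets` library. The repository id shown is a placeholder and should be replaced with this dataset's actual path on the Hugging Face Hub.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hugging Face Hub path.
ds = load_dataset("your-org/HebrewSentiment")

print(ds)  # shows the train/test splits and their sizes

# Inspect a single training example and a few of the fields listed above.
example = ds["train"][0]
print(example["id"], example["tag_ids"])
print(example["text"])
```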
## Models and Comparisons
In collaboration with [DICTA](https://dicta.org.il/), we trained a model on this dataset and are happy to release it to the public: [DictaBERT-Sentiment](https://huggingface.co/dicta-il/dictabert-sentiment).
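
As a quick illustration, the released model can be used with the standard `transformers` text-classification pipeline. This is a minimal sketch: the example sentence is arbitrary, and the exact label names returned depend on the model's configuration.

```python
from transformers import pipeline

# Sentiment classifier released together with this dataset.
classifier = pipeline("text-classification", model="dicta-il/dictabert-sentiment")

# Hebrew input meaning "The service was excellent and fast" (illustrative example).
print(classifier("השירות היה מצוין ומהיר"))
# -> [{'label': ..., 'score': ...}]  (label names come from the model's config)
```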
In addition, we compared the performance of the model against the previously released Hebrew sentiment dataset, [Hebrew-Sentiment-Data from OnlpLab](https://github.com/OnlpLab/Hebrew-Sentiment-Data).
We fine-tuned [dictabert](https://huggingface.co/dicta-il/dictabert) three times: once on the OnlpLab dataset, once on this dataset, and once on both datasets combined. The results are shown in the tables below:
**Results on OnlpLab:**

| Training Corpus | Accuracy | Macro F1 | F1 Positive | F1 Negative | F1 Neutral |
|-------------------------|----------|----------|-------------|-------------|------------|
| OnlpLab+HebrewSentiment | 87 | 61.7 | 93.2 | 74.6 | 17.4 |
| OnlpLab | 88.2 | 63.3 | 93.8 | 72.1 | 24 |
| HebrewSentiment | 69.9 | 51.7 | 82.2 | 62.9 | 10.2 |

**Results on HebrewSentiment:**

| Training Corpus | Accuracy | Macro F1 | F1 Positive | F1 Negative | F1 Neutral |
|-------------------------|----------|----------|-------------|-------------|------------|
| OnlpLab+HebrewSentiment | 83.9 | 82.7 | 79.8 | 81.8 | 86.4 |
| OnlpLab | 41.3 | 42.2 | 48.1 | 56.3 | 22.2 |
| HebrewSentiment | 84.4 | 83.2 | 81 | 82.1 | 86.6 |
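
The fine-tuning setup is not spelled out in this card; the sketch below is one plausible way to reproduce the HebrewSentiment-only run with the `transformers` Trainer. The dataset repo id is a placeholder, the label mapping and hyperparameters are illustrative, and it assumes `tag_ids` holds the string label for each example, none of which reflect the exact recipe used.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder repo id; assumes `tag_ids` holds the string label per example.
ds = load_dataset("your-org/HebrewSentiment")
label2id = {"Negative": 0, "Neutral": 1, "Positive": 2}  # illustrative mapping

tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictabert")
model = AutoModelForSequenceClassification.from_pretrained(
    "dicta-il/dictabert",
    num_labels=len(label2id),
    label2id=label2id,
    id2label={v: k for k, v in label2id.items()},
)

def preprocess(batch):
    # Tokenize the sentence and attach an integer label.
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = [label2id[t] for t in batch["tag_ids"]]
    return enc

tokenized = ds.map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dictabert-sentiment-ft", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
print(trainer.evaluate())
```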
## Contributors
[To fill in]
Contributors: [To fill in]
## Acknowledgments
We would like to express our gratitude to [To fill in]