---
license: mit
language:
- ar
base_model:
- aubmindlab/bert-base-arabertv02
pipeline_tag: token-classification
---

# SWEET MADAR CODA Model

## Model Description
`CAMeL-Lab/text-editing-coda` is a text editing model tailored for grammatical error correction (GEC) in dialectal Arabic (DA).
The model is based on [AraBERTv02](https://huggingface.co/aubmindlab/bert-base-arabertv02), which we fine-tuned using the [MADAR CODA](https://camel.abudhabi.nyu.edu/madar-coda-corpus/) corpus.
This model was introduced in our ACL 2025 paper, [Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study](https://arxiv.org/abs/2503.00985), where we refer to it as SWEET (Subword Edit Error Tagger).
It achieves state-of-the-art (SOTA) performance on the MADAR CODA dataset. Details about the training procedure, data preprocessing, and hyperparameters are available in the paper.
The fine-tuning code and associated resources are publicly available on our GitHub repository: https://github.com/CAMeL-Lab/text-editing.
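
Since SWEET casts GEC as token classification over subword edit tags, the tag inventory ships with the checkpoint itself. As a quick sanity check (a minimal sketch using only standard `transformers` config fields), you can inspect it directly:

```python
from transformers import BertForTokenClassification

model = BertForTokenClassification.from_pretrained('CAMeL-Lab/text-editing-coda')

# The token classification head predicts one edit tag per subword
print(model.config.num_labels)                    # size of the edit-tag inventory
print(list(model.config.id2label.values())[:10])  # a sample of the tags
```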



## Intended uses
To use the `CAMeL-Lab/text-editing-coda` model, you must clone our text editing [GitHub repository](https://github.com/CAMeL-Lab/text-editing) and follow its installation instructions.
We used this `SWEET` model to report results on the MADAR CODA dev and test sets in our [paper](https://arxiv.org/abs/2503.00985).

## How to use
Clone our text editing [GitHub repository](https://github.com/CAMeL-Lab/text-editing), follow its installation instructions, and then run the following:

```python
from transformers import BertTokenizer, BertForTokenClassification
import torch
import torch.nn.functional as F
from gec.tag import rewrite  # provided by the text-editing repository

tokenizer = BertTokenizer.from_pretrained('CAMeL-Lab/text-editing-coda')
model = BertForTokenClassification.from_pretrained('CAMeL-Lab/text-editing-coda')

text = 'أنا بعطيك رقم تلفونو و عنوانو'.split()

# Tokenize the pre-split words into subwords
tokenized_text = tokenizer(text, return_tensors="pt", is_split_into_words=True)

with torch.no_grad():
    logits = model(**tokenized_text).logits
    preds = F.softmax(logits.squeeze(), dim=-1)
    preds = torch.argmax(preds, dim=-1).cpu().numpy()
    # Map predicted label ids to edit tags, skipping the [CLS] and [SEP] positions
    edits = [model.config.id2label[p] for p in preds[1:-1]]
    assert len(edits) == len(tokenized_text['input_ids'][0][1:-1])

print(edits) # ['R_[ا]K*', 'K*I_[ا]K', 'K*', 'K*', 'K*', 'K*', 'K*R_[ه]', 'K*', 'MK*', 'R_[ه]']

# Apply the predicted edits to the subwords to produce the corrected sentence
subwords = tokenizer.convert_ids_to_tokens(tokenized_text['input_ids'][0][1:-1])
output_sent = rewrite(subwords=[subwords], edits=[edits])[0][0]
print(output_sent) # انا باعطيك رقم تلفونه وعنوانه
```
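
For repeated use, the steps above can be wrapped into a small helper. This is a sketch rather than part of the repository's API: `correct_sentence` is a hypothetical name, and the softmax is dropped because `argmax` over the raw logits yields the same predictions.

```python
import torch
from transformers import BertTokenizer, BertForTokenClassification
from gec.tag import rewrite  # provided by the text-editing repository

tokenizer = BertTokenizer.from_pretrained('CAMeL-Lab/text-editing-coda')
model = BertForTokenClassification.from_pretrained('CAMeL-Lab/text-editing-coda')

def correct_sentence(sentence: str) -> str:
    """Correct a single whitespace-tokenized sentence with the SWEET tagger."""
    words = sentence.split()
    encoding = tokenizer(words, return_tensors="pt", is_split_into_words=True)
    with torch.no_grad():
        logits = model(**encoding).logits
    # argmax over the raw logits equals softmax followed by argmax
    preds = torch.argmax(logits.squeeze(), dim=-1).cpu().numpy()
    edits = [model.config.id2label[p] for p in preds[1:-1]]
    subwords = tokenizer.convert_ids_to_tokens(encoding['input_ids'][0][1:-1])
    return rewrite(subwords=[subwords], edits=[edits])[0][0]

print(correct_sentence('أنا بعطيك رقم تلفونو و عنوانو'))  # انا باعطيك رقم تلفونه وعنوانه
```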



## Citation
```bibtex
@misc{alhafni-habash-2025-enhancing,
      title={Enhancing Text Editing for Grammatical Error Correction: Arabic as a Case Study}, 
      author={Bashar Alhafni and Nizar Habash},
      year={2025},
      eprint={2503.00985},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.00985}, 
}
```