pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25) |
---|---|---|---|---|---|---|---|---|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Base-NLI-CT | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Contrastive-Tension/BERT-Base-Swe-CT-STSb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Contrastive-Tension/BERT-Distil-CT-STSb | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Distil-CT | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Distil-NLI-CT | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Contrastive-Tension/BERT-Large-CT-STSb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Large-CT | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Large-NLI-CT | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Contrastive-Tension/RoBerta-Large-CT-STSb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Cooker/cicero-similis | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Cool/Demo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null |
@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}
@article{wan2020old,
title={Old Photo Restoration via Deep Latent Space Translation},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
journal={arXiv preprint arXiv:2009.07047},
year={2020}
}
| {"language": ["en"], "license": "mit", "tags": ["image_restoration", "superresolution"], "thumbnail": "https://github.com/Nick-Harvey/for_my_abuela/blob/master/cuban_large.jpg"} | Coolhand/Abuela | null | [
"image_restoration",
"superresolution",
"en",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Coolhand/Sentiment | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Atakan DialoGPT Model | {"tags": ["conversational"]} | CopymySkill/DialoGPT-medium-atakan | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# DialoGPT Captain Price (Extended) | {"tags": ["conversational"]} | Corvus/DialoGPT-medium-CaptainPrice-Extended | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Captain Price DialoGPT Model | {"tags": ["conversational"]} | Corvus/DialoGPT-medium-CaptainPrice | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
### Description
A multi-label text classification model trained on customer feedback data using DistilBert.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
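# A minimal inference sketch (assumes the labels above are exposed via model.config.id2label)
import torch
inputs = tokenizer("I would like to return these pants and shoes", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]  # multi-label: a sigmoid per label, not a softmax
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})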
``` | {"language": "en", "license": "mit", "tags": ["multi-label"], "widget": [{"text": "I would like to return these pants and shoes"}]} | CouchCat/ma_mlc_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"multi-label",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are:
- PRD: for certain products
- BRND: for brands
### Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v6_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v6_distil")
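# A minimal inference sketch (assumes a transformers release that supports aggregation_strategy)
from transformers import pipeline
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("These shoes from Adidas fit quite well"))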
``` | {"language": "en", "license": "mit", "tags": ["ner"], "widget": [{"text": "These shoes from Adidas fit quite well"}]} | CouchCat/ma_ner_v6_distil | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
### Description
A Named Entity Recognition model trained on customer feedback data using DistilBert.
Possible labels are in BIO notation. Performance of the PERS tag could be better because of the low number of data samples:
- PROD: for certain products
- BRND: for brands
- PERS: people names
The following tags are simply in place to help better categorize the previous tags
- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time related entities
- MISC: any other entity that might skew the results
### Usage
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")
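# A minimal inference sketch (assumes the BIO labels above are exposed via model.config.id2label)
import torch
inputs = tokenizer("These shoes I recently bought from Tommy Hilfiger fit quite well", return_tensors="pt")
with torch.no_grad():
    pred_ids = model(**inputs).logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(token, model.config.id2label[label_id.item()]) for token, label_id in zip(tokens, pred_ids)])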
```
| {"language": "en", "license": "mit", "tags": ["ner"], "widget": [{"text": "These shoes I recently bought from Tommy Hilfiger fit quite well. The shirt, however, has got a hole"}]} | CouchCat/ma_ner_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
### Description
A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
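# A minimal inference sketch (assumes the class order matches the negative/neutral/positive labels above)
import torch
inputs = tokenizer("I am disappointed in the terrible quality of my dress", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})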
``` | {"language": "en", "license": "mit", "tags": ["sentiment-analysis"], "widget": [{"text": "I am disappointed in the terrible quality of my dress"}]} | CouchCat/ma_sa_v7_distil | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"sentiment-analysis",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CoveJH/ConBot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Coverage/sakurajimamai | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | null |
Arthur Morgan DialoGPT Model | {"tags": ["conversational"]} | Coyotl/DialoGPT-test-last-arthurmorgan | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Arthur Morgan DialoGPT Model | {"tags": ["conversational"]} | Coyotl/DialoGPT-test2-arthurmorgan | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | null |
# DialoGPT Arthur Morgan | {"tags": ["conversational"]} | Coyotl/DialoGPT-test3-arthurmorgan | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Craak/GJ0001 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | @Piglin Talks Harry Potter | {"tags": ["conversational"]} | CracklesCreeper/Piglin-Talks-Harry-Potter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Craftified/Bob | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | sentence-transformers |
# A model.
| {"license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "feature-extraction"} | Craig/mGqFiPhu | null | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
feature-extraction | sentence-transformers |
# sentence-transformers/paraphrase-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
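Since the embeddings are intended for tasks like clustering or semantic search, a minimal sketch of scoring the similarity of two sentences with them might look like this (assuming a sentence-transformers release that provides `util.cos_sim`):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)
# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```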
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` | {"license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "feature-extraction"} | Craig/paraphrase-MiniLM-L6-v2 | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Model Finetuned from BERT-base
- Problem type: Multi-class Classification
- Model ID: 25805800
## Validation Metrics
- Loss: 0.4422711133956909
- Accuracy: 0.8615328555811976
- Macro F1: 0.8642434650461513
- Micro F1: 0.8615328555811976
- Weighted F1: 0.8617743626671308
- Macro Precision: 0.8649112225076049
- Micro Precision: 0.8615328555811976
- Weighted Precision: 0.8625407179375096
- Macro Recall: 0.8640777539828228
- Micro Recall: 0.8615328555811976
- Weighted Recall: 0.8615328555811976
## Usage
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Crasher222/kaggle-comp-test")
tokenizer = AutoTokenizer.from_pretrained("Crasher222/kaggle-comp-test")
inputs = tokenizer("I am in love with you", return_tensors="pt")
outputs = model(**inputs)
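# A minimal sketch of turning the outputs into a predicted class (label names come from the model config)
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])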
``` | {"language": "en", "tags": "autonlp", "datasets": ["Crasher222/autonlp-data-kaggle-test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 60.744727079482495} | Crasher222/kaggle-comp-test | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Crasher222/autonlp-data-kaggle-test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CrayonShinchan/bart_fine_tune_test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CrayonShinchan/fine_tune_try_1 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | hello
| {} | CrisLeaf/generador-de-historias-de-tolkien | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Crisblair/Wkwk | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Crispy/dialopt-small-kratos | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
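For reference, these settings correspond roughly to the following `TrainingArguments` (a sketch; the `output_dir` is assumed, and the Adam betas/epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```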
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7814 | 1.0 | 250 | 0.3105 | 0.907 | 0.9046 |
| 0.2401 | 2.0 | 500 | 0.2175 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9215, "name": "Accuracy"}, {"type": "f1", "value": 0.9215538311282218, "name": "F1"}]}]}]} | Crives/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | keras | {} | Crumped/imdb-simpleRNN | null | [
"keras",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CrypticT1tan/DialoGPT-medium-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Rick DialoGPT Model | {"tags": ["conversational"]} | Cryptikdw/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Crystal/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Paladin Danse DialoGPT Model | {"tags": ["conversational"]} | Cthyllax/DialoGPT-medium-PaladinDanse | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IceBERT-finetuned-ner
This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0807
- Precision: 0.8927
- Recall: 0.8632
- F1: 0.8777
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0544 | 1.0 | 2904 | 0.0774 | 0.8859 | 0.8490 | 0.8670 | 0.9833 |
| 0.0284 | 2.0 | 5808 | 0.0781 | 0.8709 | 0.8590 | 0.8649 | 0.9840 |
| 0.0166 | 3.0 | 8712 | 0.0807 | 0.8927 | 0.8632 | 0.8777 | 0.9850 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "gpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "IceBERT-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "mim_gold_ner", "type": "mim_gold_ner", "args": "mim-gold-ner"}, "metrics": [{"type": "precision", "value": 0.8927335640138409, "name": "Precision"}, {"type": "recall", "value": 0.8631855657784682, "name": "Recall"}, {"type": "f1", "value": 0.8777109531620194, "name": "F1"}, {"type": "accuracy", "value": 0.9849836396073506, "name": "Accuracy"}]}]}]} | Culmenus/IceBERT-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:gpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0891
- Precision: 0.8804
- Recall: 0.8517
- F1: 0.8658
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0573 | 1.0 | 2904 | 0.1024 | 0.8608 | 0.8003 | 0.8295 | 0.9799 |
| 0.0307 | 2.0 | 5808 | 0.0899 | 0.8707 | 0.8380 | 0.8540 | 0.9825 |
| 0.0198 | 3.0 | 8712 | 0.0891 | 0.8804 | 0.8517 | 0.8658 | 0.9837 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "agpl-3.0", "tags": ["generated_from_trainer"], "datasets": ["mim_gold_ner"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "XLMR-ENIS-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "mim_gold_ner", "type": "mim_gold_ner", "args": "mim-gold-ner"}, "metrics": [{"type": "precision", "value": 0.8803619696791632, "name": "Precision"}, {"type": "recall", "value": 0.8517339397384878, "name": "Recall"}, {"type": "f1", "value": 0.8658113730929264, "name": "F1"}, {"type": "accuracy", "value": 0.9837103244207861, "name": "Accuracy"}]}]}]} | Culmenus/XLMR-ENIS-finetuned-ner | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:mim_gold_ner",
"license:agpl-3.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Culmenus/checkpoint-168500-finetuned-de-to-is_nr2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_1 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_ancc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_ekkicc | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2 | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice) and [Infore_25h](https://files.huylenguyen.com/25hours.zip) (Password: BroughtToYouByInfoRe) datasets.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("CuongLD/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 58.63 %
## Training
The Common Voice `train`, `validation`, and `Infore_25h` datasets were used for training
The script used for training can be found [here](https://drive.google.com/file/d/1AW9R8IlsapiSGh9n3aECf23t-zhk3wUh/view?usp=sharing)
Your model is then available under *huggingface.co/CuongLD/wav2vec2-large-xlsr-vietnamese* for everybody to use 🎉.
## How to evaluate my trained checkpoint
Having uploaded your model, you should now evaluate your model in a final step. This should be as simple as
copying the evaluation code of your model card into a python script and running it. Make sure to note
the final result on the model card **both** under the YAML tags at the very top **and** below your evaluation code under "Test Results".
## Rules of training and evaluation
In this section, we will quickly go over what data is allowed to be used as training
data, what kind of data preprocessing is allowed to be used, and how the model should be evaluated.
To make it very simple regarding the first point: **All data except the official common voice `test` data set can be used as training data**. For models trained in a language that is not included in Common Voice, the author of the model is responsible for leaving a reasonable amount of data for evaluation.
Second, the rules regarding preprocessing are not as straightforward. It is allowed (and recommended) to
normalize the data to only have lower-case characters. It is also allowed (and recommended) to remove typographical
symbols and punctuation marks. A list of such symbols can *e.g.* be found [here](https://en.wikipedia.org/wiki/List_of_typographical_symbols_and_punctuation_marks) - however, here we already must be careful. We should **not** remove a symbol that
would change the meaning of the words, *e.g.* in English, we should not remove the single quotation mark `'` since it
would change the meaning of the word `"it's"` to `"its"`, which would then be incorrect. So the golden rule here is to
not remove any characters that could change the meaning of a word into another word. This is not always obvious and should
be given some consideration. As another example, it is fine to remove the "hyphen-minus" sign "`-`" since it doesn't change the
meaning of a word to another one. *E.g.* "`fine-tuning`" would be changed to "`finetuning`", which still has the same meaning.
Since those choices are not always obvious when in doubt feel free to ask on Slack or even better post on the forum, as was
done, *e.g.* [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586).
## Tips and tricks
This section summarizes a couple of tips and tricks across various topics. It will continuously be updated during the week.
### How to combine multiple datasets into one
Check out [this](https://discuss.huggingface.co/t/how-to-combine-local-data-files-with-an-official-dataset/4685) post.
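As a rough sketch of the idea (assuming the extra corpus has been prepared locally with the same `path`/`sentence` columns as Common Voice; `infore_25h.csv` is a placeholder file name):
```python
from datasets import load_dataset, concatenate_datasets

common_voice = load_dataset("common_voice", "vi", split="train+validation")
# "infore_25h.csv" is a placeholder for a locally prepared file with matching columns
infore = load_dataset("csv", data_files="infore_25h.csv", split="train")
# Both datasets must expose identical columns/features before they can be concatenated
common_voice = common_voice.remove_columns(
    [c for c in common_voice.column_names if c not in infore.column_names])
combined = concatenate_datasets([common_voice, infore])
```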
### How to effectively preprocess the data
### How to efficiently load datasets with limited RAM and hard drive space
Check out [this](https://discuss.huggingface.co/t/german-asr-fine-tuning-wav2vec2/4558/8?u=patrickvonplaten) post.
### How to do hyperparameter tuning
### How to preprocess and evaluate character based languages
## Further reading material
It is recommended that you take some time to read up on how Wav2Vec2 works in theory.
Getting a better understanding of the theory and the inner mechanisms of the model often helps when fine-tuning the model.
**However**, if you don't like reading blog posts/papers, don't worry - it is by no means necessary to go through the theory to fine-tune Wav2Vec2 on your language of choice.
If you are interested in learning more about the model though, here are a couple of resources that are important to better understand Wav2Vec2:
- [Facebook's Wav2Vec2 blog post](https://ai.facebook.com/blog/wav2vec-state-of-the-art-speech-recognition-through-self-supervision/)
- [Official Wav2Vec2 paper](https://arxiv.org/abs/2006.11477)
- [Official XLSR Wav2vec2 paper](https://arxiv.org/pdf/2006.13979.pdf)
- [Hugging Face Blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
- [How does CTC (Connectionist Temporal Classification) work](https://distill.pub/2017/ctc/)
It helps to have a good understanding of the following points:
- How was XLSR-Wav2Vec2 pretrained? -> Feature vectors were masked and had to be predicted by the model; very similar in spirit to masked language model of BERT.
- What parts of XLSR-Wav2Vec2 are responsible for what? What is the feature extractor part used for? -> extract feature vectors from the 1D raw audio waveform; What is the transformer part doing? -> mapping feature vectors to contextualized feature vectors; ...
- What part of the model needs to be fine-tuned? -> The pretrained model **does not** include a language head to classify the contextualized features to letters. This is randomly initialized when loading the pretrained checkpoint and has to be fine-tuned. Also, note that the authors recommend to **not** further fine-tune the feature extractor.
- What data was used to pretrain XLSR-Wav2Vec2? The checkpoint we will use for further fine-tuning was pretrained on **53** languages.
- What languages are considered to be similar by XLSR-Wav2Vec2? In the official [XLSR Wav2Vec2 paper](https://arxiv.org/pdf/2006.13979.pdf), the authors show nicely which languages share a common contextualized latent space. It might be useful for you to extend your training data with data of other languages that are considered to be very similar by the model (or you).
## FAQ
- Can a participant fine-tune models for more than one language?
Yes! A participant can fine-tune models in as many languages as she/he likes.
- Can a participant use extra data (apart from the common voice data)?
Yes! All data except the official common voice `test data` can be used for training.
If a participant wants to train a model on a language that is not part of Common Voice (which
is very much encouraged!), the participant should make sure that some test data is held out to
make sure the model is not overfitting.
- Can we fine-tune for high-resource languages?
Yes! We do not really recommend fine-tuning models in English, since there are
already so many fine-tuned speech recognition models in English. However, it is very much
appreciated if participants want to fine-tune models in other "high-resource" languages, such
as French, Spanish, or German. For such cases, one probably needs to train locally and
might have to apply tricks such as lazy data loading (check the ["Lazy data loading"](#how-to-do-lazy-data-loading) section for more details).
| {"language": "vi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice, infore_25h"], "metrics": ["wer"], "model-index": [{"name": "Cuong-Cong XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 58.63, "name": "Test WER"}]}]}]} | CuongLD/wav2vec2-large-xlsr-vietnamese | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"vi",
"arxiv:2006.11477",
"arxiv:2006.13979",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CurtisASmith/GPT-JRT | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CurtisBowser/DialoGPT-medium-sora-three | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | null |
# Sora DialoGPT Model | {"tags": ["conversational"]} | CurtisBowser/DialoGPT-medium-sora-two | null | [
"pytorch",
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Sora DialoGPT Model
| {"tags": ["conversational"]} | CurtisBowser/DialoGPT-medium-sora | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"has_space"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Sora DialoGPT Model | {"tags": ["conversational"]} | CurtisBowser/DialoGPT-small-sora | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Chandler Bot DialoGPT model | {"tags": ["conversational"]} | CyberMuffin/DialoGPT-small-ChandlerBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Cyrell/Cyrell | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Czapla/Rick | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | D-Keqi/espnet_asr_train_asr_streaming_transformer_raw_en_bpe500_sp_valid.acc.ave | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | D3vil/DialoGPT-smaall-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | D3vil/DialoGPT-smaall-harrypottery | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | D3xter1922/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-finetuned-cola
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6367
- Matthews Correlation: 0.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4139 | 1.0 | 535 | 0.4137 | 0.6381 |
| 0.2452 | 2.0 | 1070 | 0.4887 | 0.6504 |
| 0.17 | 3.0 | 1605 | 0.5335 | 0.6757 |
| 0.1135 | 4.0 | 2140 | 0.6367 | 0.6824 |
| 0.0817 | 5.0 | 2675 | 0.6742 | 0.6755 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "electra-base-discriminator-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.6824089073723449, "name": "Matthews Correlation"}]}]}]} | D3xter1922/electra-base-discriminator-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | D3xter1922/electra-base-discriminator-finetuned-mnli | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | D4RL1NG/yes | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Anakin Skywalker DialoGPT Model | {"tags": ["conversational"]} | DARKVIP3R/DialoGPT-medium-Anakin | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# bert-base-irish-cased-v1
[gaBERT](https://aclanthology.org/2022.lrec-1.511/) is a BERT-base model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper.
## Model description
An encoder-based Transformer used to obtain features for fine-tuning on downstream tasks in Irish.
## Intended uses & limitations
Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
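As a quick sanity check, the masked-sentence example from this card's widget can be run with the standard fill-mask pipeline (a minimal sketch; any recent 🤗 Transformers release should work):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="DCU-NLP/bert-base-irish-cased-v1")
print(unmasker("Ceoltóir [MASK] ab ea Johnny Cash."))
```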
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
### BibTeX entry and citation info
If you use this model in your research, please consider citing our paper:
```
@inproceedings{barry-etal-2022-gabert,
title = "ga{BERT} {---} an {I}rish Language Model",
author = "Barry, James and
Wagner, Joachim and
Cassidy, Lauren and
Cowap, Alan and
Lynn, Teresa and
Walsh, Abigail and
{\'O} Meachair, M{\'\i}che{\'a}l J. and
Foster, Jennifer",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.511",
pages = "4774--4788",
abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.",
}
```
| {"tags": ["generated_from_keras_callback"], "widget": [{"text": "Ceolt\u00f3ir [MASK] ab ea Johnny Cash."}], "model-index": [{"name": "bert-base-irish-cased-v1", "results": []}]} | DCU-NLP/bert-base-irish-cased-v1 | null | [
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# gaELECTRA
[gaELECTRA](https://aclanthology.org/2022.lrec-1.511/) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model.
### Limitations and bias
Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
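A minimal sketch of loading the discriminator with a token classification head for such fine-tuning (the number of labels is a placeholder for your own tag set):
```python
from transformers import AutoTokenizer, ElectraForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("DCU-NLP/electra-base-irish-cased-discriminator-v1")
# num_labels=5 is a placeholder; set it to the size of your own label set
model = ElectraForTokenClassification.from_pretrained(
    "DCU-NLP/electra-base-irish-cased-discriminator-v1", num_labels=5
)
```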
### BibTeX entry and citation info
If you use this model in your research, please consider citing our paper:
```
@inproceedings{barry-etal-2022-gabert,
title = "ga{BERT} {---} an {I}rish Language Model",
author = "Barry, James and
Wagner, Joachim and
Cassidy, Lauren and
Cowap, Alan and
Lynn, Teresa and
Walsh, Abigail and
{\'O} Meachair, M{\'\i}che{\'a}l J. and
Foster, Jennifer",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.511",
pages = "4774--4788",
abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.",
}
``` | {"language": ["ga"], "license": "apache-2.0", "tags": ["irish", "electra"], "widget": [{"text": "Ceolt\u00f3ir [MASK] ab ea Johnny Cash."}]} | DCU-NLP/electra-base-irish-cased-discriminator-v1 | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"irish",
"ga",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# gaELECTRA
[gaELECTRA](https://aclanthology.org/2022.lrec-1.511/) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model.
### Limitations and bias
Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
### BibTeX entry and citation info
If you use this model in your research, please consider citing our paper:
```
@inproceedings{barry-etal-2022-gabert,
title = "ga{BERT} {---} an {I}rish Language Model",
author = "Barry, James and
Wagner, Joachim and
Cassidy, Lauren and
Cowap, Alan and
Lynn, Teresa and
Walsh, Abigail and
{\'O} Meachair, M{\'\i}che{\'a}l J. and
Foster, Jennifer",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.511",
pages = "4774--4788",
abstract = "The BERT family of neural language models have become highly popular due to their ability to provide sequences of text with rich context-sensitive token encodings which are able to generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.",
}
``` | {"language": ["ga"], "license": "apache-2.0", "tags": ["irish", "electra"], "widget": [{"text": "Ceolt\u00f3ir [MASK] ab ea Johnny Cash."}]} | DCU-NLP/electra-base-irish-cased-generator-v1 | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"irish",
"ga",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | DHBaek/gpt2-stackoverflow-question-contents-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | DHBaek/xlm-roberta-large-korquad-mask | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
# Danish BERT (uncased) model
[BotXO.ai](https://www.botxo.ai/) developed this model. For data and training details see their [GitHub repository](https://github.com/botxo/nordic_bert).
The original model was trained in TensorFlow and then converted to PyTorch using [transformers-cli](https://huggingface.co/transformers/converting_tensorflow_models.html?highlight=cli).
The TensorFlow version can be downloaded here: https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1
## Architecture
```python
from transformers import AutoModelForPreTraining
model = AutoModelForPreTraining.from_pretrained("DJSammy/bert-base-danish-uncased_BotXO,ai")
params = list(model.named_parameters())
print('danish_bert_uncased_v2 has {:} different named parameters.\n'.format(len(params)))
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Last Transformer ====\n')
for p in params[181:197]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[197:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
# danish_bert_uncased_v2 has 206 different named parameters.
# ==== Embedding Layer ====
# bert.embeddings.word_embeddings.weight (32000, 768)
# bert.embeddings.position_embeddings.weight (512, 768)
# bert.embeddings.token_type_embeddings.weight (2, 768)
# bert.embeddings.LayerNorm.weight (768,)
# bert.embeddings.LayerNorm.bias (768,)
# ==== First Transformer ====
# bert.encoder.layer.0.attention.self.query.weight (768, 768)
# bert.encoder.layer.0.attention.self.query.bias (768,)
# bert.encoder.layer.0.attention.self.key.weight (768, 768)
# bert.encoder.layer.0.attention.self.key.bias (768,)
# bert.encoder.layer.0.attention.self.value.weight (768, 768)
# bert.encoder.layer.0.attention.self.value.bias (768,)
# bert.encoder.layer.0.attention.output.dense.weight (768, 768)
# bert.encoder.layer.0.attention.output.dense.bias (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.0.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.0.intermediate.dense.bias (3072,)
# bert.encoder.layer.0.output.dense.weight (768, 3072)
# bert.encoder.layer.0.output.dense.bias (768,)
# bert.encoder.layer.0.output.LayerNorm.weight (768,)
# bert.encoder.layer.0.output.LayerNorm.bias (768,)
# ==== Last Transformer ====
# bert.encoder.layer.11.attention.self.query.weight (768, 768)
# bert.encoder.layer.11.attention.self.query.bias (768,)
# bert.encoder.layer.11.attention.self.key.weight (768, 768)
# bert.encoder.layer.11.attention.self.key.bias (768,)
# bert.encoder.layer.11.attention.self.value.weight (768, 768)
# bert.encoder.layer.11.attention.self.value.bias (768,)
# bert.encoder.layer.11.attention.output.dense.weight (768, 768)
# bert.encoder.layer.11.attention.output.dense.bias (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.attention.output.LayerNorm.bias (768,)
# bert.encoder.layer.11.intermediate.dense.weight (3072, 768)
# bert.encoder.layer.11.intermediate.dense.bias (3072,)
# bert.encoder.layer.11.output.dense.weight (768, 3072)
# bert.encoder.layer.11.output.dense.bias (768,)
# bert.encoder.layer.11.output.LayerNorm.weight (768,)
# bert.encoder.layer.11.output.LayerNorm.bias (768,)
# ==== Output Layer ====
# bert.pooler.dense.weight (768, 768)
# bert.pooler.dense.bias (768,)
# cls.predictions.bias (32000,)
# cls.predictions.transform.dense.weight (768, 768)
# cls.predictions.transform.dense.bias (768,)
# cls.predictions.transform.LayerNorm.weight (768,)
# cls.predictions.transform.LayerNorm.bias (768,)
# cls.seq_relationship.weight (2, 768)
# cls.seq_relationship.bias (2,)
```
## Example Pipeline
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='DJSammy/bert-base-danish-uncased_BotXO,ai')
unmasker('København er [MASK] i Danmark.')
# Copenhagen is the [MASK] of Denmark.
# =>
# [{'score': 0.788068950176239,
# 'sequence': '[CLS] københavn er hovedstad i danmark. [SEP]',
# 'token': 12610,
# 'token_str': 'hovedstad'},
# {'score': 0.07606703042984009,
# 'sequence': '[CLS] københavn er hovedstaden i danmark. [SEP]',
# 'token': 8108,
# 'token_str': 'hovedstaden'},
# {'score': 0.04299738258123398,
# 'sequence': '[CLS] københavn er metropol i danmark. [SEP]',
# 'token': 23305,
# 'token_str': 'metropol'},
# {'score': 0.008163209073245525,
# 'sequence': '[CLS] københavn er ikke i danmark. [SEP]',
# 'token': 89,
# 'token_str': 'ikke'},
# {'score': 0.006238455418497324,
# 'sequence': '[CLS] københavn er ogsa i danmark. [SEP]',
# 'token': 25253,
# 'token_str': 'ogsa'}]
```
| {"language": "da", "license": "cc-by-4.0", "tags": ["bert", "masked-lm"], "datasets": ["common_crawl", "wikipedia"], "pipeline_tag": "fill-mask", "widget": [{"text": "K\u00f8benhavn er [MASK] i Danmark."}]} | DJSammy/bert-base-danish-uncased_BotXO-ai | null | [
"transformers",
"pytorch",
"jax",
"bert",
"masked-lm",
"fill-mask",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers | {} | DJSammy/bert-base-swedish-uncased_BotXO-ai | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | DJStomp/TestingSalvoNET | null | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | DKpro000/DialoGPT-medium-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | DKpro000/DialoGPT-small-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | DLNLP/t5-small-finetuned-xsum | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | DSI/TweetBasedSA | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | DSI/ar_emotion_6 | null | [
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | # Human-Directed Sentiment Analysis in Arabic
A supervised training procedure to classify human-directed sentiment in a text. We define human-directed sentiment as the polarity expressed by one user towards a second person involved with them in a discussion. | {} | DSI/human-directed-sentiment | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | DSI/personal_sentiment | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
This model can be used to determine whether or not a tweet expresses support for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English.
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
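As a minimal, hedged sketch of how this model might be queried (assuming the generic `text-classification` pipeline works for this checkpoint; the exact label names are defined by the checkpoint's config and are not asserted here):
```python
from transformers import pipeline

# Load the curfew-support classifier; the label names come from the checkpoint's own config.
classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support",
)

# The multilingual base model lets the same classifier handle Dutch, French and English tweets.
print(classifier("I really wish I could leave my house after midnight, this makes no sense!"))
```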
| {"language": ["multilingual", "nl", "fr", "en"], "tags": ["Tweets", "Sentiment analysis"], "widget": [{"text": "I really wish I could leave my house after midnight, this makes no sense!"}]} | DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"Tweets",
"Sentiment analysis",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
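A comparable hedged sketch for this topic classifier (again assuming the generic `text-classification` pipeline; the topic label set comes from the checkpoint's config):
```python
from transformers import pipeline

# Load the topic classifier for COVID-19 measure tweets.
topic_classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-topics",
)

# The predicted label indicates which government measure the tweet talks about.
print(topic_classifier("I really can't wait for this lockdown to be over and go back to waking up early."))
```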
| {"language": ["multilingual", "nl", "fr", "en"], "tags": ["Dutch", "French", "English", "Tweets", "Topic classification"], "widget": [{"text": "I really can't wait for this lockdown to be over and go back to waking up early."}]} | DTAI-KULeuven/mbert-corona-tweets-belgium-topics | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"Dutch",
"French",
"English",
"Tweets",
"Topic classification",
"multilingual",
"nl",
"fr",
"en",
"arxiv:2104.09947",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | this model |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
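As an illustrative sketch (not part of the original card), any of the Huggingface ids from the table above can be dropped into a standard fill-mask pipeline; shown here for this BORT variant:
```python
from transformers import pipeline

# Substitute any other RobBERTje id from the table above to try a different variant.
unmasker = pipeline('fill-mask', model='DTAI-KULeuven/robbertje-1-gb-bort')
unmasker('Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven.')
```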
| {"language": "nl", "license": "mit", "tags": ["Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje"], "datasets": ["oscar", "dbrd", "lassy-ud", "europarl-mono", "conll2002"], "thumbnail": "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png", "widget": [{"text": "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."}]} | DTAI-KULeuven/robbertje-1-gb-bort | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"nl",
"dataset:oscar",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | this model |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
| {"language": "nl", "license": "mit", "tags": ["Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje"], "datasets": ["oscar", "oscar (NL)", "dbrd", "lassy-ud", "europarl-mono", "conll2002"], "thumbnail": "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png", "widget": [{"text": "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."}]} | DTAI-KULeuven/robbertje-1-gb-merged | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"nl",
"arxiv:2101.05716",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | this model |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
| {"language": "nl", "license": "mit", "tags": ["Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje"], "datasets": ["oscar", "dbrd", "lassy-ud", "europarl-mono", "conll2002"], "thumbnail": "https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png", "widget": [{"text": "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."}]} | DTAI-KULeuven/robbertje-1-gb-non-shuffled | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"nl",
"dataset:oscar",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | this model |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
| {"language": "nl", "license": "mit", "tags": ["Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje"], "datasets": ["oscar", "oscar (NL)", "dbrd", "lassy-ud", "europarl-mono", "conll2002"], "thumbnail": "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png", "widget": [{"text": "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."}]} | DTAI-KULeuven/robbertje-1-gb-shuffled | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"nl",
"arxiv:2101.05716",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish BERT for emotion detection
The BERT Emotion model detects whether a Danish text is emotional or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-binary-emotion-classification-base")
```
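Continuing the snippet above, a hedged sketch of a single prediction (the mapping from logits to the emotional/non-emotional labels is read from the checkpoint's `id2label` config rather than assumed here):
```python
import torch

# Tokenize a Danish sentence and score it with the classifier loaded above.
inputs = tokenizer("Der er et træ i haven.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Resolve the highest-scoring class through the checkpoint's own label mapping.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```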
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Der er et tr\u00e6 i haven."}]} | alexandrainst/da-binary-emotion-classification-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish BERT for emotion classification
The BERT Emotion model classifies a Danish text into one of the following classes:
* Glæde/Sindsro
* Tillid/Accept
* Forventning/Interrese
* Overasket/Målløs
* Vrede/Irritation
* Foragt/Modvilje
* Sorg/trist
* Frygt/Bekymret
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
This model should be used after detecting whether the text contains emotion or not, using the binary [BERT Emotion model](https://huggingface.co/alexandrainst/da-binary-emotion-classification-base).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-emotion-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Jeg ejer en r\u00f8d bil og det er en god bil."}]} | alexandrainst/da-emotion-classification-base | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish BERT for hate speech classification
The BERT HateSpeech model classifies offensive Danish text into 4 categories:
* `Særlig opmærksomhed` (special attention, e.g. threat)
* `Personangreb` (personal attack)
* `Sprogbrug` (offensive language)
* `Spam & indhold` (spam)
This model is intended to be used after the [BERT HateSpeech detection model](https://huggingface.co/alexandrainst/da-hatespeech-detection-base).
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-classification-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Senile gamle idiot"}]} | alexandrainst/da-hatespeech-classification-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish BERT for hate speech (offensive language) detection
The BERT HateSpeech model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#bertdr) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-base")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Senile gamle idiot"}]} | alexandrainst/da-hatespeech-detection-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
# BERT fine-tuned for Named Entity Recognition in Danish
The model tags tokens (in Danish sentences) with named entity tags (BIO format) [PER, ORG, LOC, MISC].
The pretrained language model used for fine-tuning is the [Danish BERT](https://github.com/certainlyio/nordic_bert) by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ner.html#bert) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForTokenClassification
model = BertForTokenClassification.from_pretrained("alexandrainst/da-ner-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-ner-base")
```
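Continuing the snippet above, a hedged sketch that wraps the loaded model in a token-classification pipeline (the `simple` aggregation strategy, which merges word pieces into whole entities, is an illustrative choice rather than a recommendation from the original card):
```python
from transformers import pipeline

# Group word-piece predictions into whole named entities.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jens Peter Hansen kommer fra Danmark"))
```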
## Training Data
The model has been trained on the [DaNE](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dane). | {"language": ["da"], "license": "apache-2.0", "datasets": ["dane"], "widget": [{"text": "Jens Peter Hansen kommer fra Danmark"}]} | alexandrainst/da-ner-base | null | [
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"da",
"dataset:dane",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Model Card for Danish BERT
Danish BERT Tone for sentiment polarity detection
# Model Details
## Model Description
The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been finetuned on the pretrained Danish BERT model by BotXO.
- **Developed by:** DaNLP
- **Shared by [Optional]:** Hugging Face
- **Model type:** Text Classification
- **Language(s) (NLP):** Danish (da)
- **License:** cc-by-sa-4.0
- **Related Models:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/certainlyio/nordic_bert)
- [Associated Documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone)
# Uses
## Direct Use
This model can be used for text classification
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
## Training Procedure
### Preprocessing
It has been finetuned on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO.
### Speeds, Sizes, Times
More information needed.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed.
### Factors
More information needed.
### Metrics
F1
## Results
More information needed.
# Model Examination
More information needed.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed.
- **Hours used:** More information needed.
- **Cloud Provider:** More information needed.
- **Compute Region:** More information needed.
- **Carbon Emitted:** More information needed.
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed.
## Compute Infrastructure
More information needed.
### Hardware
More information needed.
### Software
More information needed.
# Citation
**BibTeX:**
More information needed.
**APA:**
More information needed.
# Glossary [optional]
More information needed.
# More Information [optional]
More information needed.
# Model Card Authors [optional]
DaNLP in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-sentiment-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-sentiment-base")
```
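A hedged follow-up sketch (continuing the snippet above; the polarity label names are read from the checkpoint rather than asserted here):
```python
from transformers import pipeline

# Wrap the loaded model and tokenizer in a sentiment pipeline and score a Danish sentence.
sentiment = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(sentiment("Det er super godt"))
```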
</details> | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Det er super godt"}]} | alexandrainst/da-sentiment-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish BERT Tone for the detection of subjectivity/objectivity
The BERT Tone model detects whether a text (in Danish) is subjective or objective.
The model is based on the finetuning of the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
```
## Training data
The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets. | {"language": ["da"], "license": "apache-2.0", "datasets": ["DDSC/twitter-sent", "DDSC/europarl"], "widget": [{"text": "Jeg tror alligvel, det bliver godt"}]} | alexandrainst/da-subjectivivity-classification-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"da",
"dataset:DDSC/twitter-sent",
"dataset:DDSC/europarl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# Danish ELECTRA for hate speech (offensive language) detection
The ELECTRA Offensive model detects whether a Danish text is offensive or not.
It is based on the pretrained [Danish Ælæctra](https://huggingface.co/Maltehb/aelaectra-danish-electra-small-cased) model.
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/hatespeech.html#electra) for more details.
Here is how to use the model:
```python
from transformers import ElectraTokenizer, ElectraForSequenceClassification
model = ElectraForSequenceClassification.from_pretrained("alexandrainst/da-hatespeech-detection-small")
tokenizer = ElectraTokenizer.from_pretrained("alexandrainst/da-hatespeech-detection-small")
```
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio. | {"language": ["da"], "license": "apache-2.0", "widget": [{"text": "Senile gamle idiot"}]} | alexandrainst/da-hatespeech-detection-small | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# XLM-Roberta fine-tuned for Named Entity Disambiguation
Given a sentence and a knowledge graph context, the model detects whether a specific entity (represented by the knowledge graph context) is mentioned in the sentence (binary classification).
The base language model used is the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base).
Here is how to use the model:
```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification
model = XLMRobertaForSequenceClassification.from_pretrained("alexandrainst/da-ned-base")
tokenizer = XLMRobertaTokenizer.from_pretrained("alexandrainst/da-ned-base")
```
The tokenizer takes two strings as input: the sentence and the knowledge graph (KG) context.
Here is an example:
```python
sentence = "Karen Blixen vendte tilbage til Danmark, hvor hun boede resten af sit liv på Rungstedlund, som hun arvede efter sin mor i 1939"
kg_context = "udmærkelser modtaget Kritikerprisen udmærkelser modtaget Tagea Brandts Rejselegat udmærkelser modtaget Ingenio et arti udmærkelser modtaget Holbergmedaljen udmærkelser modtaget De Gyldne Laurbær mor Ingeborg Dinesen ægtefælle Bror von Blixen-Finecke køn kvinde Commons-kategori Karen Blixen LCAuth no95003722 VIAF 90663542 VIAF 121643918 GND-identifikator 118637878 ISNI 0000 0001 2096 6265 ISNI 0000 0003 6863 4408 ISNI 0000 0001 1891 0457 fødested Rungstedlund fødested Rungsted dødssted Rungstedlund dødssted København statsborgerskab Danmark NDL-nummer 00433530 dødsdato +1962-09-07T00:00:00Z dødsdato +1962-01-01T00:00:00Z fødselsdato +1885-04-17T00:00:00Z fødselsdato +1885-01-01T00:00:00Z AUT NKC jn20000600905 AUT NKC jo2015880827 AUT NKC xx0196181 emnets hovedkategori Kategori:Karen Blixen tilfælde af menneske billede Karen Blixen cropped from larger original.jpg IMDb-identifikationsnummer nm0227598 Freebase-ID /m/04ymd8w BNF 118857710 beskæftigelse skribent beskæftigelse selvbiograf beskæftigelse novelleforfatter ..."
```
A KG context, for a specific entity, can be generated from its Wikidata page.
In the previous example, the KG context is a string representation of the Wikidata page of [Karen Blixen (QID=Q182804)](https://www.wikidata.org/wiki/Q182804).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/ned.html#xlmr) for more details about how to generate a KG context.
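Continuing the snippets above, a hedged sketch of the final scoring step (truncation to 512 tokens is an assumption made here because KG contexts can be very long; the meaning of the two output classes follows the checkpoint's `id2label` config):
```python
import torch

# Encode the sentence and the KG context as one paired input, truncating the long context.
inputs = tokenizer(sentence, kg_context, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Binary decision: does the sentence mention the entity described by the KG context?
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```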
## Training Data
The model has been trained on the [DaNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#daned) and [DaWikiNED](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#dawikined) datasets. | {"language": ["da"], "license": "apache-2.0"} | alexandrainst/da-ned-base | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"text-classification",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |