modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
textattack/distilbert-base-uncased-SST-2 | 6fea14f6264ea28d8405573dac228b3e11137643 | 2020-06-09T16:48:10.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-SST-2 | 19 | null | transformers | 8,600 | Entry not found |
tupleblog/generate-thai-lyrics | e1a1c4732f79938fdfbd3934563f685540532b91 | 2021-08-09T23:06:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"th",
"transformers"
] | text-generation | false | tupleblog | null | tupleblog/generate-thai-lyrics | 19 | 1 | transformers | 8,601 | ---
language:
- th
widget:
- text: "ความรัก"
- text: "อยากรู้"
- text: "ไหนว่า"
---
# Generate Thai Lyrics (แต่งเพลงไทยด้วย GPT-2)
GPT-2 for Thai lyrics generation. We use [GPT-2 base Thai](https://huggingface.co/flax-community/gpt2-base-thai) as the pre-trained model
and fine-tune it on [Siamzone lyrics](https://www.siamzone.com/music/thailyric/).
We trained this GPT-2 model to compose Thai song lyrics using lyrics from the Siamzone website.
## Example use
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "tupleblog/generate-thai-lyrics"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.config.pad_token_id = model.config.eos_token_id  # GPT-2 has no pad token; reuse EOS

nlp = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)
text = "ความรัก"
nlp(text, max_length=100, do_sample=True, top_k=40, temperature=0.8)  # sampling, so varying temperature and top-k produces different outputs
```
|
vinko/shitposting-AI | 3e0c5e065c6dac5d2b8acf25fcdb186091e36166 | 2022-07-12T09:33:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | vinko | null | vinko/shitposting-AI | 19 | 1 | transformers | 8,602 | Entry not found |
yhavinga/mt5-base-cnn-nl | b2764d1d7d0eb947d650e1ff006fcf67ca91d1cb | 2021-03-05T07:48:08.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dutch",
"dataset:cnn_dm_nl",
"transformers",
"summarization",
"autotrain_compatible"
] | summarization | false | yhavinga | null | yhavinga/mt5-base-cnn-nl | 19 | null | transformers | 8,603 | ---
tags:
- summarization
language:
- dutch
datasets:
- cnn_dm_nl
widget:
- text: "(CNN) Skywatchers in West-Noord-Amerika zijn in voor een traktatie: een bijna vijf minuten totale maansverduistering vanmorgen. Hier is hoe het zich ontvouwt:. Het begon om 3:16 a.m. Pacific Daylight Tijd, toen de maan begon te bewegen in de schaduw van de Aarde. Voor het volgende uur en 45 minuten, die schaduw zal bewegen over de maan en verzwolgen het om 4:58 a.m. Pacific Time. De totale verduistering zal slechts vier minuten en 43 seconden duren, en NASA zegt dat maakt het de kortste van de eeuw. Kijken live op NASA TV. Terwijl mensen ten westen van de Mississippi River zal het beste uitzicht hebben, ten minste een gedeeltelijke verduistering zal zichtbaar zijn over de hele natie. Maar zonsopgang zal de show te onderbreken op de Oostkust. Delen van Zuid-Amerika, India, China en China Een maansverduistering gebeurt wanneer de zon, de aarde en de maan een rechte lijn vormen in de ruimte, met de aarde in het midden. De zon schijnt op de Aarde en creëert een schaduw. Als de maan dieper in die schaduw beweegt, lijkt het donker te worden en lijkt zelfs een roodachtige kleur te zijn. Waarom rood? Omdat de atmosfeer van de Aarde het grootste deel van het blauwe licht filtert. Sommige mensen hebben het effect van de \"bloedmaan\" bijgenaamd. NASA zegt dat maansverduisteringen meestal ten minste twee keer per jaar plaatsvinden, maar deze verduistering is de derde in een reeks van vier op een rij, bekend als een \"tetrad.\" De eerste was op 15 april 2014. De tweede was in september 2014, de volgende is zaterdag en er zal er een meer zijn, op 28 september. Als je meer wilt weten over de verduistering, NASA astronoom Mitzi Adam. Deel uw foto's met CNN iReport."
- text: "(CNN) Filipino's worden gewaarschuwd om op wacht te staan voor flash overstromingen en aardverschuivingen als tropische storm Maysak benaderde de Aziatische eiland natie zaterdag. Slechts een paar dagen geleden, Maysak kreeg super tyfoon status dankzij zijn aanhoudende 150 km/h winden. Het heeft sindsdien verloren veel stoom als het naar het westen in de Stille Oceaan heeft gedraaid. Het is nu geclassificeerd als een tropische storm, volgens de Filipijnse nationale weerdienst, die noemt het een andere naam, Chedeng. Het heeft stabiele winden van meer dan 70 km/h (115 km/h) en gusts tot 90 km/h vanaf 17.00 uur (5 uur ET) Zaterdag. Toch, dat betekent niet dat Maysak zal geen pak een wallop. Autoriteiten nam preventieve stappen om mensen veilig te houden zoals barring outdoor activiteiten zoals zwemmen, surfen, di. Gabriel Llave, een ramp ambtenaar, vertelde PNA dat toeristen die aankomen zaterdag in en rond de kustplaats van Aurora \"zal niet worden geaccepteerd door de eigenaren van hotels, resorts, herbergen en dergelijke... en zal worden geadviseerd om terug te keren naar hun respectievelijke plaatsen.\" Aldczar Aurelio, een meteoroloog met de Filippijnse Atmosferische, Geofysische en Astronomische Diensten Administratie (PAGASA), zei dat de storm was gecentreerd 200 mijl ten zuidwesten van de provincie Aurora vanaf 5 uur (5 uur ET) en richting het westen op een 12.5 mph clip. Het is verwacht dat landval zondagochtend maken op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen tegen maandag. Ahead van de storm. Isabela Gov. Faustino Dry III waarschuwde zaterdag dat bewoners moet handelen als deze zal maken landfall zondagochtend op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen voor maandag."
---
# mt5-base-cnn-nl
mt5-base fine-tuned on CNN/DailyMail (CNN DM) translated to Dutch (nl).
* Learning rate 1e-3
* Trained for 1 epoch
* Max source length 1024
* Max target length 142
* rouge1 31.1766
* rouge2 8.4538
* rougeL 17.8674
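A minimal usage sketch with the standard `transformers` summarization pipeline (the input text is a placeholder; generation settings such as `max_length=142` mirror the max target length above but are otherwise illustrative):
```python
from transformers import pipeline

# Dutch abstractive summarization with the fine-tuned mT5 checkpoint.
summarizer = pipeline("summarization", model="yhavinga/mt5-base-cnn-nl")

article = "Hier komt een Nederlandstalig nieuwsartikel van meerdere alinea's."  # placeholder input
result = summarizer(article, max_length=142, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```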
|
wietsedv/xlm-roberta-base-ft-udpos28-pt | 904fe6c9b3641c0a77ac9dcd37e6466ed59c1c04 | 2022-02-25T09:59:14.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"pt",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-pt | 19 | null | transformers | 8,604 |
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-pt
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 87.1
- type: accuracy
name: Dutch Test accuracy
value: 87.5
- type: accuracy
name: German Test accuracy
value: 80.5
- type: accuracy
name: Italian Test accuracy
value: 88.7
- type: accuracy
name: French Test accuracy
value: 89.7
- type: accuracy
name: Spanish Test accuracy
value: 91.8
- type: accuracy
name: Russian Test accuracy
value: 88.6
- type: accuracy
name: Swedish Test accuracy
value: 87.7
- type: accuracy
name: Norwegian Test accuracy
value: 82.5
- type: accuracy
name: Danish Test accuracy
value: 88.6
- type: accuracy
name: Low Saxon Test accuracy
value: 54.8
- type: accuracy
name: Akkadian Test accuracy
value: 36.5
- type: accuracy
name: Armenian Test accuracy
value: 83.9
- type: accuracy
name: Welsh Test accuracy
value: 64.8
- type: accuracy
name: Old East Slavic Test accuracy
value: 77.4
- type: accuracy
name: Albanian Test accuracy
value: 77.8
- type: accuracy
name: Slovenian Test accuracy
value: 78.3
- type: accuracy
name: Guajajara Test accuracy
value: 26.3
- type: accuracy
name: Kurmanji Test accuracy
value: 76.9
- type: accuracy
name: Turkish Test accuracy
value: 77.2
- type: accuracy
name: Finnish Test accuracy
value: 82.8
- type: accuracy
name: Indonesian Test accuracy
value: 85.3
- type: accuracy
name: Ukrainian Test accuracy
value: 85.4
- type: accuracy
name: Polish Test accuracy
value: 85.7
- type: accuracy
name: Portuguese Test accuracy
value: 94.2
- type: accuracy
name: Kazakh Test accuracy
value: 81.4
- type: accuracy
name: Latin Test accuracy
value: 77.9
- type: accuracy
name: Old French Test accuracy
value: 64.7
- type: accuracy
name: Buryat Test accuracy
value: 59.9
- type: accuracy
name: Kaapor Test accuracy
value: 22.5
- type: accuracy
name: Korean Test accuracy
value: 60.8
- type: accuracy
name: Estonian Test accuracy
value: 84.5
- type: accuracy
name: Croatian Test accuracy
value: 86.3
- type: accuracy
name: Gothic Test accuracy
value: 30.9
- type: accuracy
name: Swiss German Test accuracy
value: 45.7
- type: accuracy
name: Assyrian Test accuracy
value: 16.1
- type: accuracy
name: North Sami Test accuracy
value: 40.7
- type: accuracy
name: Naija Test accuracy
value: 41.6
- type: accuracy
name: Latvian Test accuracy
value: 85.1
- type: accuracy
name: Chinese Test accuracy
value: 31.0
- type: accuracy
name: Tagalog Test accuracy
value: 72.0
- type: accuracy
name: Bambara Test accuracy
value: 32.3
- type: accuracy
name: Lithuanian Test accuracy
value: 83.5
- type: accuracy
name: Galician Test accuracy
value: 88.0
- type: accuracy
name: Vietnamese Test accuracy
value: 64.4
- type: accuracy
name: Greek Test accuracy
value: 83.8
- type: accuracy
name: Catalan Test accuracy
value: 91.7
- type: accuracy
name: Czech Test accuracy
value: 87.3
- type: accuracy
name: Erzya Test accuracy
value: 47.9
- type: accuracy
name: Bhojpuri Test accuracy
value: 51.4
- type: accuracy
name: Thai Test accuracy
value: 44.9
- type: accuracy
name: Marathi Test accuracy
value: 85.9
- type: accuracy
name: Basque Test accuracy
value: 75.7
- type: accuracy
name: Slovak Test accuracy
value: 88.8
- type: accuracy
name: Kiche Test accuracy
value: 35.6
- type: accuracy
name: Yoruba Test accuracy
value: 29.2
- type: accuracy
name: Warlpiri Test accuracy
value: 33.6
- type: accuracy
name: Tamil Test accuracy
value: 83.7
- type: accuracy
name: Maltese Test accuracy
value: 31.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.6
- type: accuracy
name: Icelandic Test accuracy
value: 80.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.7
- type: accuracy
name: Urdu Test accuracy
value: 66.6
- type: accuracy
name: Romanian Test accuracy
value: 84.8
- type: accuracy
name: Persian Test accuracy
value: 75.2
- type: accuracy
name: Apurina Test accuracy
value: 40.8
- type: accuracy
name: Japanese Test accuracy
value: 16.5
- type: accuracy
name: Hungarian Test accuracy
value: 84.5
- type: accuracy
name: Hindi Test accuracy
value: 73.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 22.7
- type: accuracy
name: Komi Permyak Test accuracy
value: 51.0
- type: accuracy
name: Faroese Test accuracy
value: 77.3
- type: accuracy
name: Sanskrit Test accuracy
value: 36.3
- type: accuracy
name: Livvi Test accuracy
value: 63.2
- type: accuracy
name: Arabic Test accuracy
value: 78.7
- type: accuracy
name: Wolof Test accuracy
value: 38.6
- type: accuracy
name: Bulgarian Test accuracy
value: 88.5
- type: accuracy
name: Akuntsu Test accuracy
value: 29.8
- type: accuracy
name: Makurap Test accuracy
value: 17.1
- type: accuracy
name: Kangri Test accuracy
value: 46.0
- type: accuracy
name: Breton Test accuracy
value: 65.4
- type: accuracy
name: Telugu Test accuracy
value: 83.2
- type: accuracy
name: Cantonese Test accuracy
value: 37.6
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 57.9
- type: accuracy
name: Karelian Test accuracy
value: 71.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 76.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 65.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 43.2
- type: accuracy
name: Irish Test accuracy
value: 69.0
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 19.9
- type: accuracy
name: Manx Test accuracy
value: 36.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 38.3
- type: accuracy
name: Afrikaans Test accuracy
value: 82.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 32.1
- type: accuracy
name: Belarusian Test accuracy
value: 86.6
- type: accuracy
name: Serbian Test accuracy
value: 87.9
- type: accuracy
name: Moksha Test accuracy
value: 44.4
- type: accuracy
name: Western Armenian Test accuracy
value: 79.7
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 58.1
- type: accuracy
name: Khunsari Test accuracy
value: 50.0
- type: accuracy
name: Hebrew Test accuracy
value: 93.8
- type: accuracy
name: Uyghur Test accuracy
value: 75.2
- type: accuracy
name: Chukchi Test accuracy
value: 34.9
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Portuguese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pt")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pt")
```
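For quick experiments, the same checkpoint can be wrapped in a token-classification pipeline; this is a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

# Part-of-speech tagging; the labels are Universal Dependencies UPOS tags.
pos_tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-pt")

for token in pos_tagger("O Porto é uma cidade bonita."):
    print(token["word"], token["entity"], round(token["score"], 3))
```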
|
cnicu/t5-small-booksum | 0169a67dc4873b529ca3612bdc9d79365632816a | 2022-02-26T21:32:52.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"summary",
"license:mit",
"autotrain_compatible"
] | summarization | false | cnicu | null | cnicu/t5-small-booksum | 19 | 2 | transformers | 8,605 | ---
license: mit
tags:
- summarization
- summary
datasets:
- kmfoda/booksum
---
|
ali2066/distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57 | ae34543efff9a179fa156e6d8f681753fbf9783c | 2022-03-01T14:18:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57 | 19 | null | transformers | 8,606 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5923
- Precision: 0.0039
- Recall: 0.0212
- F1: 0.0066
- Accuracy: 0.7084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
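The training script itself is not included in this card; as a rough sketch only, the hyperparameters above correspond to a `TrainingArguments` configuration along these lines (the output directory and evaluation strategy are assumptions):
```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; Adam defaults already use betas=(0.9, 0.999) and eps=1e-8.
training_args = TrainingArguments(
    output_dir="distilBERT_token_itr0_0.0001_webDiscourse",  # placeholder name
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: the results table reports one validation row per epoch
)
```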
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6673 | 0.0476 | 0.0128 | 0.0202 | 0.6652 |
| No log | 2.0 | 20 | 0.6211 | 0.0 | 0.0 | 0.0 | 0.6707 |
| No log | 3.0 | 30 | 0.6880 | 0.0038 | 0.0128 | 0.0058 | 0.6703 |
| No log | 4.0 | 40 | 0.6566 | 0.0030 | 0.0128 | 0.0049 | 0.6690 |
| No log | 5.0 | 50 | 0.6036 | 0.0 | 0.0 | 0.0 | 0.6868 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 | 50bdc5dc06be09ed566294504a84e088e4b141b5 | 2022-03-01T14:20:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35 | 19 | null | transformers | 8,607 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12 | 085586cf4b9e6c2bf87b0319d7ce43dcbe75a066 | 2022-03-01T14:22:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12 | 19 | null | transformers | 8,608 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1290
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0733 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 2.0 | 30 | 0.0732 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 3.0 | 45 | 0.0731 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 4.0 | 60 | 0.0716 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 5.0 | 75 | 0.0635 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12 | 1f213dc717ec7c1f89c834cfc69e74e115249d60 | 2022-03-01T14:25:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12 | 19 | null | transformers | 8,609 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14 | 2bb02fe596c6b533fa3d5f1b609a6363c313f212 | 2022-03-01T14:48:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14 | 19 | null | transformers | 8,610 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0092
- Recall: 0.0403
- F1: 0.0150
- Accuracy: 0.7291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5856 | 0.0012 | 0.0125 | 0.0022 | 0.6950 |
| No log | 2.0 | 20 | 0.5933 | 0.0 | 0.0 | 0.0 | 0.7282 |
| No log | 3.0 | 30 | 0.5729 | 0.0051 | 0.025 | 0.0085 | 0.7155 |
| No log | 4.0 | 40 | 0.6178 | 0.0029 | 0.0125 | 0.0047 | 0.7143 |
| No log | 5.0 | 50 | 0.6707 | 0.0110 | 0.0375 | 0.0170 | 0.7178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47 | 781dd9df4e4f5f7ab9df50cb60f5176f7936e52b | 2022-03-01T14:50:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47 | 19 | null | transformers | 8,611 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21 | fbb89199322aeef09e41d7f54fd344bd9730f06f | 2022-03-01T14:52:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21 | 19 | null | transformers | 8,612 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1103 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 2.0 | 30 | 0.0842 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 3.0 | 45 | 0.0767 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 4.0 | 60 | 0.0754 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 5.0 | 75 | 0.0735 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19 | b27ac23d47951eabb2ed2f33b72a6a0912bb7f9f | 2022-03-01T14:55:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19 | 19 | null | transformers | 8,613 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- Precision: 0.3373
- Recall: 0.5670
- F1: 0.4230
- Accuracy: 0.8943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3783 | 0.1833 | 0.3975 | 0.2509 | 0.8413 |
| No log | 2.0 | 60 | 0.3021 | 0.3280 | 0.4820 | 0.3904 | 0.8876 |
| No log | 3.0 | 90 | 0.3196 | 0.3504 | 0.5036 | 0.4133 | 0.8918 |
| No log | 4.0 | 120 | 0.3645 | 0.3434 | 0.5306 | 0.4170 | 0.8759 |
| No log | 5.0 | 150 | 0.4027 | 0.3217 | 0.5486 | 0.4056 | 0.8797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
be4rr/xlm-roberta-base-finetuned-panx-de | 432fa9c8d8a752f3377262ca3782e3cb45d02669 | 2022-03-05T06:37:26.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:xtreme",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | be4rr | null | be4rr/xlm-roberta-base-finetuned-panx-de | 19 | null | transformers | 8,614 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.862669465085938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1374
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2596 | 1.0 | 525 | 0.1571 | 0.8302 |
| 0.1292 | 2.0 | 1050 | 0.1416 | 0.8455 |
| 0.0809 | 3.0 | 1575 | 0.1374 | 0.8627 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
peterhsu/mt5-small-finetuned-amazon-en-zh_TW | e8ac447be45995697e1828b1084d08d591d51f74 | 2022-03-10T07:05:34.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | peterhsu | null | peterhsu/mt5-small-finetuned-amazon-en-zh_TW | 19 | null | transformers | 8,615 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-zh_TW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-zh_TW
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2408
- Rouge1: 15.8831
- Rouge2: 7.1676
- Rougel: 15.5523
- Rougelsum: 15.4954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.5388 | 1.0 | 838 | 3.5888 | 12.6081 | 5.3611 | 12.3495 | 12.2926 |
| 4.0043 | 2.0 | 1676 | 3.4038 | 13.8517 | 6.3417 | 13.4755 | 13.4913 |
| 3.6776 | 3.0 | 2514 | 3.3294 | 15.1519 | 7.3842 | 14.8844 | 14.8458 |
| 3.4929 | 4.0 | 3352 | 3.2668 | 15.6067 | 7.4016 | 15.3715 | 15.2908 |
| 3.387 | 5.0 | 4190 | 3.2855 | 15.0546 | 7.3065 | 14.8271 | 14.7755 |
| 3.302 | 6.0 | 5028 | 3.2457 | 15.0213 | 6.6597 | 14.6131 | 14.5641 |
| 3.2806 | 7.0 | 5866 | 3.2408 | 15.8831 | 7.1676 | 15.5523 | 15.4954 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
mitiku/AmharicCacoPostag | b1cf0a2d0953f500e97e68dead4da9c00b18d6e4 | 2022-03-20T10:11:18.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | mitiku | null | mitiku/AmharicCacoPostag | 19 | null | transformers | 8,616 | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicCacoPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicCacoPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ccdv/lsg-distilbert-base-uncased-4096 | 686895e1d6a2c7a7d8972c6b20b71760dd143be5 | 2022-07-25T16:35:33.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"en",
"transformers",
"long context",
"autotrain_compatible"
] | fill-mask | false | ccdv | null | ccdv/lsg-distilbert-base-uncased-4096 | 19 | null | transformers | 8,617 | ---
language: en
tags:
- long context
---
# LSG model
**Transformers >= 4.18.0**\
**This model relies on a custom modeling file, you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
* [Usage](#usage)
* [Parameters](#parameters)
* [Sparse selection type](#sparse-selection-type)
* [Tasks](#tasks)
* [Training global tokens](#training-global-tokens)
This model is adapted from [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) without additional pretraining yet. It uses the same number of parameters/layers and the same tokenizer.
This model can handle long sequences faster and more efficiently than Longformer or BigBird (from Transformers) and relies on Local + Sparse + Global attention (LSG).
The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads the sequences if needed (adaptive=True in config). It is however recommended to let the tokenizer truncate the inputs (truncation=True) and optionally pad to a multiple of the block size (pad_to_multiple_of=...). \
Encoder-decoder and causal masking are supported, but I did not test them extensively.\
Implemented in PyTorch.

## Usage
The model relies on a custom modeling file, so you need to add trust_remote_code=True to use it.
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
```
## Parameters
You can change various parameters, such as:
* the number of global tokens (num_global_tokens=1)
* local block size (block_size=128)
* sparse block size (sparse_block_size=128)
* sparsity factor (sparsity_factor=2)
* mask_first_token (mask first token since it is redundant with the first global token)
* see config.json file
Default parameters work well in practice. If you are short on memory, reduce block sizes, increase sparsity factor and remove dropout in the attention score matrix.
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
    trust_remote_code=True,
    num_global_tokens=16,
    block_size=64,
    sparse_block_size=64,
    attention_probs_dropout_prob=0.0,
    sparsity_factor=4,
    sparsity_type="none",
    mask_first_token=True
)
```
## Sparse selection type
There are 5 different sparse selection patterns. The best type is task-dependent; a short loading sketch follows the list below. \
Note that for sequences with length < 2*block_size, the type has no effect.
* sparsity_type="norm", select highest norm tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="pooling", use average pooling to merge tokens
* Works best for a small sparsity_factor (2 to 4)
* Additional parameters:
* None
* sparsity_type="lsh", use the LSH algorithm to cluster similar tokens
* Works best for a large sparsity_factor (4+)
* LSH relies on random projections, thus inference may differ slightly with different seeds
* Additional parameters:
* lsg_num_pre_rounds=1, pre merge tokens n times before computing centroids
* sparsity_type="stride", use a striding mecanism per head
* Each head will use different tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
* sparsity_type="block_stride", use a striding mecanism per head
* Each head will use block of tokens strided by sparsify_factor
* Not recommended if sparsify_factor > num_heads
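For example, to switch to the LSH pattern with a larger sparsity factor, pass the corresponding arguments at load time (a sketch mirroring the parameter example above):
```python
from transformers import AutoModel

# Select the LSH sparse pattern; a larger sparsity_factor (4+) works best for "lsh".
model = AutoModel.from_pretrained(
    "ccdv/lsg-distilbert-base-uncased-4096",
    trust_remote_code=True,
    sparsity_type="lsh",
    sparsity_factor=4,
)
```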
## Tasks
Fill mask example:
```python
from transformers import FillMaskPipeline, AutoModelForMaskedLM, AutoTokenizer
model = AutoModelForMaskedLM.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
SENTENCES = ["Paris is the <mask> of France.", "The goal of life is <mask>."]
pipeline = FillMaskPipeline(model, tokenizer)
output = pipeline(SENTENCES, top_k=1)
output = [o[0]["sequence"] for o in output]
> ['Paris is the capital of France.', 'The goal of life is happiness.']
```
Classification example:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
SENTENCE = "This is a test for sequence classification. " * 300
token_ids = tokenizer(
SENTENCE,
return_tensors="pt",
#pad_to_multiple_of=... # Optional
truncation=True
)
output = model(**token_ids)
> SequenceClassifierOutput(loss=None, logits=tensor([[-0.3051, -0.1762]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Training global tokens
To train global tokens and the classification head only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096",
trust_remote_code=True,
pool_with_global=True, # pool with a global token instead of first token
num_global_tokens=16
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-distilbert-base-uncased-4096")
for name, param in model.named_parameters():
    if "global_embeddings" not in name:
        param.requires_grad = False
    else:
        param.requires_grad = True
```
|
ukr-models/xlm-roberta-base-uk | 6cf5dfdf6fdcd44c81e22dd98e1c0340898d1558 | 2022-03-12T08:15:16.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"uk",
"transformers",
"ukrainian",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | ukr-models | null | ukr-models/xlm-roberta-base-uk | 19 | 2 | transformers | 8,618 | ---
language:
- uk
tags:
- ukrainian
widget:
- text: "Тарас Шевченко – великий український <mask>."
license: mit
---
This is a smaller version of the [XLM-RoBERTa](https://huggingface.co/xlm-roberta-base) model with only Ukrainian and some English embeddings left.
* The original model has 470M parameters, with 384M of them being input and output embeddings.
* After shrinking the `sentencepiece` vocabulary from 250K to 31K (top 25K Ukrainian tokens and top English tokens) the number of model parameters reduced to 134M parameters, and model size reduced from 1GB to 400MB.
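A minimal fill-mask sketch (the example sentence matches the widget above):
```python
from transformers import pipeline

# Masked-token prediction with the shrunk Ukrainian XLM-RoBERTa checkpoint.
fill_mask = pipeline("fill-mask", model="ukr-models/xlm-roberta-base-uk")

for prediction in fill_mask("Тарас Шевченко – великий український <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```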
|
nikolamilosevic/distil_bert_uncased-finetuned-relations | f34ec890eacef9d6f1c3f79e20f352e4ccbd9215 | 2022-06-19T13:28:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | nikolamilosevic | null | nikolamilosevic/distil_bert_uncased-finetuned-relations | 19 | null | transformers | 8,619 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distil_bert_uncased-finetuned-relations
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_bert_uncased-finetuned-relations
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4191
- Accuracy: 0.8866
- Prec: 0.8771
- Recall: 0.8866
- F1: 0.8808
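A minimal inference sketch (the input sentence is only a placeholder, and the label names returned depend on this model's config, which is not documented here):
```python
from transformers import pipeline

# Relation classification treated as plain sequence classification.
classifier = pipeline("text-classification", model="nikolamilosevic/distil_bert_uncased-finetuned-relations")

print(classifier("The protein interacts with the receptor."))  # placeholder sentence
```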
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Prec | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|
| 1.1823 | 1.0 | 232 | 0.5940 | 0.8413 | 0.8273 | 0.8413 | 0.8224 |
| 0.4591 | 2.0 | 464 | 0.4600 | 0.8607 | 0.8539 | 0.8607 | 0.8555 |
| 0.3106 | 3.0 | 696 | 0.4160 | 0.8812 | 0.8763 | 0.8812 | 0.8785 |
| 0.246 | 4.0 | 928 | 0.4113 | 0.8834 | 0.8766 | 0.8834 | 0.8796 |
| 0.2013 | 5.0 | 1160 | 0.4191 | 0.8866 | 0.8771 | 0.8866 | 0.8808 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.13.0.dev20220614
- Datasets 2.2.2
- Tokenizers 0.11.6
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_EN | 3728d8cd2666a51d430fba1ebb588eeef086d7a6 | 2022-03-17T14:51:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_EN | 19 | null | transformers | 8,620 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_EN
This model is a fine-tuned version of [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN) on the CRAFTone dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2213
- Precision: 0.8528
- Recall: 0.8617
- F1: 0.8572
- Accuracy: 0.9709
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated.
To improve the F1 score, transfer learning was completed in two steps.
Using [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_EN) as a base model, I finetuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT
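A usage sketch with entity grouping (the example sentence is illustrative, and the exact tag set should be checked in the model config):
```python
from transformers import pipeline

# Group sub-word pieces into full entity spans with aggregation_strategy="simple".
ner = pipeline(
    "token-classification",
    model="StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_EN",
    aggregation_strategy="simple",
)

for entity in ner("The p53 protein is expressed in Mus musculus liver cells."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```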
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0106 | 1.0 | 1360 | 0.1866 | 0.8343 | 0.8661 | 0.8499 | 0.9698 |
| 0.0063 | 2.0 | 2720 | 0.2100 | 0.8536 | 0.8537 | 0.8537 | 0.9701 |
| 0.0031 | 3.0 | 4080 | 0.2133 | 0.8506 | 0.8578 | 0.8542 | 0.9705 |
| 0.0008 | 4.0 | 5440 | 0.2213 | 0.8528 | 0.8617 | 0.8572 | 0.9709 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES | 772c128b62e7913f0d4f9a957c62c02cf1e3c533 | 2022-03-17T14:51:33.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES | 19 | null | transformers | 8,621 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_AugmentedTransfer_ES
This model is a fine-tuned version of [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.8535
- Recall: 0.8476
- F1: 0.8505
- Accuracy: 0.9705
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter code to a full name, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement. 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT translated CRAFT) were concatenated.
To improve the F1 score, transfer learning was completed in two steps.
Using [StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_Augmented_ES) as a base model, I finetuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0177 | 1.0 | 1360 | 0.2318 | 0.8510 | 0.8275 | 0.8391 | 0.9684 |
| 0.0102 | 2.0 | 2720 | 0.2253 | 0.8322 | 0.8455 | 0.8388 | 0.9683 |
| 0.0039 | 3.0 | 4080 | 0.2193 | 0.8383 | 0.8451 | 0.8416 | 0.9689 |
| 0.002 | 4.0 | 5440 | 0.2298 | 0.8535 | 0.8476 | 0.8505 | 0.9705 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
saattrupdan/job-listing-filtering-model | 5e178873916413a84ed8a750161b789b3026c668 | 2022-03-22T18:21:05.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | saattrupdan | null | saattrupdan/job-listing-filtering-model | 19 | null | transformers | 8,622 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: job-listing-filtering-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# job-listing-filtering-model
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4639 | 1.55 | 50 | 0.4343 |
| 0.407 | 3.12 | 100 | 0.3589 |
| 0.3459 | 4.68 | 150 | 0.3110 |
| 0.2871 | 6.25 | 200 | 0.2604 |
| 0.1966 | 7.8 | 250 | 0.2004 |
| 0.0994 | 9.37 | 300 | 0.1766 |
| 0.0961 | 10.92 | 350 | 0.2007 |
| 0.0954 | 12.49 | 400 | 0.1716 |
| 0.0498 | 14.06 | 450 | 0.1642 |
| 0.0419 | 15.62 | 500 | 0.1811 |
| 0.0232 | 17.18 | 550 | 0.1872 |
| 0.0146 | 18.74 | 600 | 0.1789 |
| 0.0356 | 20.31 | 650 | 0.1984 |
| 0.0325 | 21.86 | 700 | 0.1845 |
| 0.0381 | 23.43 | 750 | 0.1994 |
| 0.0063 | 24.98 | 800 | 0.1992 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
asafaya/hubert-base-turkish | 4fcd01cfe986a1a3817233e3b89ddb77c2ce65e2 | 2022-03-29T19:08:55.000Z | [
"pytorch",
"hubert",
"feature-extraction",
"transformers",
"license:cc-by-nc-4.0"
] | feature-extraction | false | asafaya | null | asafaya/hubert-base-turkish | 19 | null | transformers | 8,623 | ---
license: cc-by-nc-4.0
---
|
nqcccccc/phobert-vlsp-absa-qab | a254bc07bc0b2fd5b810b6d91b240456a4909d5b | 2022-04-02T17:08:50.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | nqcccccc | null | nqcccccc/phobert-vlsp-absa-qab | 19 | null | transformers | 8,624 | Entry not found |
palakagl/distilbert_MultiClass_TextClassification | 0c50f39087b2f9263b3c26d4a63ddb3efeb5fd6e | 2022-04-07T17:12:15.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:palakagl/autotrain-data-PersonalAssitant",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | palakagl | null | palakagl/distilbert_MultiClass_TextClassification | 19 | null | transformers | 8,625 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- palakagl/autotrain-data-PersonalAssitant
co2_eq_emissions: 2.258363491829382
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 717221781
- CO2 Emissions (in grams): 2.258363491829382
## Validation Metrics
- Loss: 0.38660314679145813
- Accuracy: 0.9042081949058693
- Macro F1: 0.9079200295131094
- Micro F1: 0.9042081949058692
- Weighted F1: 0.9052766730963512
- Macro Precision: 0.9116101664087508
- Micro Precision: 0.9042081949058693
- Weighted Precision: 0.9097680514456175
- Macro Recall: 0.9080246002936301
- Micro Recall: 0.9042081949058693
- Weighted Recall: 0.9042081949058693
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/palakagl/autotrain-PersonalAssitant-717221781
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("palakagl/autotrain-PersonalAssitant-717221781", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("palakagl/autotrain-PersonalAssitant-717221781", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
BigSalmon/GPT2Neo1.3BPoints2 | 5d3d912d44abc0be66d3ab8e7fb8a731a7178d48 | 2022-04-12T19:20:21.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPT2Neo1.3BPoints2 | 19 | null | transformers | 8,626 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPT2Neo1.3BPoints2")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPT2Neo1.3BPoints2")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence. |
Wogiger/roberta-food | 0614d95efac62c8f5b818d6e3ea4ee8ee4b3fa69 | 2022-04-15T08:46:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Wogiger | null | Wogiger/roberta-food | 19 | null | transformers | 8,627 | Entry not found |
Helsinki-NLP/opus-mt-tc-big-lv-en | 6a72922634efb08f24d49149300122ef84e313e6 | 2022-06-01T13:00:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"lv",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-lv-en | 19 | null | transformers | 8,628 | ---
language:
- en
- lv
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-lv-en
results:
- task:
name: Translation lav-eng
type: translation
args: lav-eng
dataset:
name: flores101-devtest
type: flores_101
args: lav eng devtest
metrics:
- name: BLEU
type: bleu
value: 37.2
- task:
name: Translation lav-eng
type: translation
args: lav-eng
dataset:
name: newsdev2017
type: newsdev2017
args: lav-eng
metrics:
- name: BLEU
type: bleu
value: 30.8
- task:
name: Translation lav-eng
type: translation
args: lav-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: lav-eng
metrics:
- name: BLEU
type: bleu
value: 59.2
- task:
name: Translation lav-eng
type: translation
args: lav-eng
dataset:
name: newstest2017
type: wmt-2017-news
args: lav-eng
metrics:
- name: BLEU
type: bleu
value: 21.8
---
# opus-mt-tc-big-lv-en
Neural machine translation model for translating from Latvian (lv) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): lav
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-eng/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information released models: [OPUS-MT lav-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Dienai ir divdesmit četras stundas.",
"Jys lobs advokats."
]
model_name = "pytorch-models/opus-mt-tc-big-lv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# The day has twenty-four hours.
# Jys lobs lawyer.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-lv-en")
print(pipe("Dienai ir divdesmit četras stundas."))
# expected output: The day has twenty-four hours.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-eng/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-eng/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| lav-eng | tatoeba-test-v2021-08-07 | 0.73884 | 59.2 | 1631 | 11213 |
| lav-eng | flores101-devtest | 0.64246 | 37.2 | 1012 | 24721 |
| lav-eng | newsdev2017 | 0.55467 | 30.8 | 2003 | 48175 |
| lav-eng | newstest2017 | 0.48769 | 21.8 | 2001 | 47511 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:46:50 EEST 2022
* port machine: LM0-400-22516.local
|
schhwmn/mbart-large-50-finetuned-ukr-gec | 799924f48e2aaa2f2609e8181e8b64b3dd01f85d | 2022-04-21T11:33:45.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"uk",
"arxiv:2103.16997",
"transformers",
"gec",
"mbart-50",
"autotrain_compatible"
] | text2text-generation | false | schhwmn | null | schhwmn/mbart-large-50-finetuned-ukr-gec | 19 | null | transformers | 8,629 | ---
language: uk
tags:
- gec
- mbart-50
widget:
- text: "я й не думав що комп'ютерна лінгвістика це легкоо."
---
This model was finetuned on errorful sentences from the `train` subset of [UA-GEC](https://github.com/grammarly/ua-gec) corpus, introduced in [UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language](https://arxiv.org/abs/2103.16997) paper.
Only sentences containing errors were used; 8,874 sentences for training and 987 sentences for validation. The training arguments were defined as follows:
```
batch_size = 4
num_train_epochs = 3
learning_rate=5e-5
weight_decay=0.01
optim = "adamw_hf"
``` |
eslamxm/AraT5-base-title-generation-finetuned-ar-wikilingua | 9137f2a54721e97c0aa5dbb241d5fcf7ef7407d3 | 2022-04-20T04:35:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/AraT5-base-title-generation-finetuned-ar-wikilingua | 19 | null | transformers | 8,630 | ---
tags:
- summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: AraT5-base-title-generation-finetuned-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraT5-base-title-generation-finetuned-ar-xlsum
This model is a fine-tuned version of [UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8120
- Rouge-1: 23.29
- Rouge-2: 8.44
- Rouge-l: 20.74
- Gen Len: 18.16
- Bertscore: 70.88
## Model description
More information needed
## Intended uses & limitations
More information needed
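In the absence of a documented usage example, here is a minimal generation sketch, assuming the checkpoint follows the standard T5 text2text interface of the AraT5 base model. The input text is a placeholder and the generation parameters are illustrative choices, not tuned values from training.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eslamxm/AraT5-base-title-generation-finetuned-ar-wikilingua"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # an Arabic article to summarize, e.g. from the wiki_lingua test split

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```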
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 6.1002 | 1.0 | 5111 | 5.2917 | 18.95 | 5.84 | 17.01 | 17.9 | 68.69 |
| 5.4427 | 2.0 | 10222 | 5.0877 | 20.61 | 6.73 | 18.58 | 17.14 | 69.69 |
| 5.1876 | 3.0 | 15333 | 4.9631 | 21.27 | 7.17 | 19.09 | 17.69 | 69.82 |
| 5.0256 | 4.0 | 20444 | 4.8984 | 21.7 | 7.53 | 19.55 | 17.56 | 70.18 |
| 4.9104 | 5.0 | 25555 | 4.8538 | 22.23 | 7.54 | 19.79 | 17.6 | 70.33 |
| 4.8251 | 6.0 | 30666 | 4.8309 | 22.35 | 7.6 | 19.96 | 17.64 | 70.51 |
| 4.7666 | 7.0 | 35777 | 4.8168 | 22.45 | 7.81 | 20.15 | 17.47 | 70.61 |
| 4.7275 | 8.0 | 40888 | 4.8120 | 22.67 | 7.83 | 20.34 | 17.56 | 70.66 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
vivianhuang88/bert_twitter_hashtag | 41881997709e9403b6b73dd4cedced5fdeaf05e4 | 2022-04-19T06:13:59.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | vivianhuang88 | null | vivianhuang88/bert_twitter_hashtag | 19 | null | transformers | 8,631 | ---
license: afl-3.0
---
# Overview
This model is based on the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model and trained on more than 30k tweets scraped from Twitter. By inputting a sentence with a '[MASK]' token indicating the position you would like to fill with a hashtag, our model can suggest potentially related trending topics based on your tweet context.
# Define a list of trending topics
```python
trending_topics = ["#topic1", "#topic2"]  # replace with your own list of candidate hashtag topics
```
# Download the model
```python
from transformers import pipeline, BertTokenizer
import numpy as np
MODEL = "vivianhuang88/bert_twitter_hashtag"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = BertTokenizer.from_pretrained(MODEL, additional_special_tokens=trending_topics)
```
# Get the output
```python
def print_candidates(text, candidates):
for i in range(5):
token = tokenizer.decode(candidates[i]['token'])
topic = ''.join(token.split())
output = text.replace("[MASK]", topic)
print(output)
text = "Bruce has an electric guitar set in [MASK]. "
candidates = fill_mask(text, targets = trending_topics)
print_candidates(text, candidates)
``` |
emilylearning/finetuned_cgp_added_birth_place__female_weight_1.5__test_run_False__p_dataset_100 | 2e1240eebf9911c7ebdb49cccf2d9270d0c4332e | 2022-04-21T22:13:54.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_added_birth_place__female_weight_1.5__test_run_False__p_dataset_100 | 19 | null | transformers | 8,632 | Entry not found |
emilylearning/finetuned_cgp_add_birth_date__f_weight_5__p_dataset_100__test_False | 463a352f13b612bf36d961d093e1a493c7d6ad92 | 2022-04-25T08:11:31.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_add_birth_date__f_weight_5__p_dataset_100__test_False | 19 | null | transformers | 8,633 | Entry not found |
emilylearning/finetuned_cgp_add_birth_place__f_weight_5__p_dataset_100__test_False | beac4c8e8f8d5f5517bc6e75e0037347c84bf89f | 2022-04-24T21:06:12.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_add_birth_place__f_weight_5__p_dataset_100__test_False | 19 | null | transformers | 8,634 | Entry not found |
Lucifermorningstar011/autotrain-final-784824206 | 97489bd4366239eba7bc9f9b4c3b5222e4e4029f | 2022-04-25T18:46:51.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-final",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-final-784824206 | 19 | null | transformers | 8,635 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-final
co2_eq_emissions: 354.21745907505175
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 784824206
- CO2 Emissions (in grams): 354.21745907505175
## Validation Metrics
- Loss: 0.1393078863620758
- Accuracy: 0.9785765909606228
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-final-784824206
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-final-784824206", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-final-784824206", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Lucifermorningstar011/autotrain-final-784824213 | 327013fea91dfbde55328555aa776401beb7ecfb | 2022-04-25T19:24:43.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:Lucifermorningstar011/autotrain-data-final",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | Lucifermorningstar011 | null | Lucifermorningstar011/autotrain-final-784824213 | 19 | null | transformers | 8,636 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Lucifermorningstar011/autotrain-data-final
co2_eq_emissions: 443.62532415086787
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 784824213
- CO2 Emissions (in grams): 443.62532415086787
## Validation Metrics
- Loss: 0.12777526676654816
- Accuracy: 0.9823625038850627
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Lucifermorningstar011/autotrain-final-784824213
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("Lucifermorningstar011/autotrain-final-784824213", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Lucifermorningstar011/autotrain-final-784824213", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
yihsuan/best_model_0426_base | e16b7d1bbeaa94f2dedbda63a2da20efd7b11dfc | 2022-04-28T01:44:27.000Z | [
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"zh",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
] | summarization | false | yihsuan | null | yihsuan/best_model_0426_base | 19 | null | transformers | 8,637 | ---
tags:
- summarization
- mT5
language:
- zh
widget:
- text: "專家稱維康桑格研究所(Wellcome Sanger Institute)的上述研究發現「令人震驚」而且「發人深省」。基因變異指關於我們身體成長和管理的相關指令,也就是DNA當中發生的變化。長期以來,變異一直被當作癌症的根源,但是數十年來關於變異是否對衰老有重要影響一直存在爭論。桑格研究所的研究人員說他們得到了「第一個試驗性證據」,證明了兩者的關係。他們分析了預期壽命各異的物種基因變異的不同速度。研究人員分析了貓、黑白疣猴、狗、雪貂、長頸鹿、馬、人、獅子、裸鼴鼠、兔子、老鼠、環尾狐猴和老虎等十幾種動物的DNA。發表在《自然》雜誌上的研究顯示,老鼠在短暫的生命當中每年經歷了將近800次變異,老鼠的壽命一般不到4年。"
inference:
parameters:
max_length: 50
--- |
Lilya/distilbert-base-uncased-finetuned-ner-TRANS | 587367aa304f568002e426e2162f72485244aa86 | 2022-04-28T07:00:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Lilya | null | Lilya/distilbert-base-uncased-finetuned-ner-TRANS | 19 | null | transformers | 8,638 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-TRANS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner-TRANS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1053
- Precision: 0.7911
- Recall: 0.8114
- F1: 0.8011
- Accuracy: 0.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
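Until more details are added, the following sketch shows a generic way to query the model with the Transformers token-classification pipeline. The example sentence is a placeholder, and the entity label set is whatever the checkpoint's config defines, which is not documented here.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Lilya/distilbert-base-uncased-finetuned-ner-TRANS",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

print(ner("Barack Obama visited Paris in 2015."))  # placeholder sentence
```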
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.077 | 1.0 | 3762 | 0.0724 | 0.7096 | 0.7472 | 0.7279 | 0.9741 |
| 0.0538 | 2.0 | 7524 | 0.0652 | 0.7308 | 0.7687 | 0.7493 | 0.9766 |
| 0.0412 | 3.0 | 11286 | 0.0643 | 0.7672 | 0.7875 | 0.7772 | 0.9788 |
| 0.0315 | 4.0 | 15048 | 0.0735 | 0.7646 | 0.7966 | 0.7803 | 0.9793 |
| 0.0249 | 5.0 | 18810 | 0.0772 | 0.7805 | 0.7981 | 0.7892 | 0.9801 |
| 0.0213 | 6.0 | 22572 | 0.0783 | 0.7829 | 0.8063 | 0.7944 | 0.9805 |
| 0.0187 | 7.0 | 26334 | 0.0858 | 0.7821 | 0.8010 | 0.7914 | 0.9809 |
| 0.0157 | 8.0 | 30096 | 0.0860 | 0.7837 | 0.8120 | 0.7976 | 0.9812 |
| 0.0122 | 9.0 | 33858 | 0.0963 | 0.7857 | 0.8129 | 0.7990 | 0.9813 |
| 0.0107 | 10.0 | 37620 | 0.0993 | 0.7934 | 0.8089 | 0.8010 | 0.9812 |
| 0.0091 | 11.0 | 41382 | 0.1031 | 0.7882 | 0.8123 | 0.8001 | 0.9814 |
| 0.0083 | 12.0 | 45144 | 0.1053 | 0.7911 | 0.8114 | 0.8011 | 0.9815 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
cassiepowell/msmarco-RoBERTa-for-similarity | 831764faebb3d2b211d3e3cc65e1114f6f336faa | 2022-04-28T17:46:46.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cassiepowell | null | cassiepowell/msmarco-RoBERTa-for-similarity | 19 | null | transformers | 8,639 | Entry not found |
HiTZ/A2T_RoBERTa_SMFA_ACE-arg | 6cb7fc2c69aea176c58e379c6ffdf8d63b1e61e4 | 2022-05-08T23:09:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:snli",
"dataset:anli",
"dataset:multi_nli",
"dataset:multi_nli_mismatch",
"dataset:fever",
"arxiv:2104.14690",
"arxiv:2203.13602",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | HiTZ | null | HiTZ/A2T_RoBERTa_SMFA_ACE-arg | 19 | null | transformers | 8,640 | ---
pipeline_tag: zero-shot-classification
datasets:
- snli
- anli
- multi_nli
- multi_nli_mismatch
- fever
---
# A2T Entailment model
**Important:** These pretrained entailment models are intended to be used with the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library but are also fully compatible with the `ZeroShotTextClassificationPipeline` from [Transformers](https://github.com/huggingface/Transformers).
Textual Entailment (or Natural Language Inference) has turned out to be a good choice for zero-shot text classification problems [(Yin et al., 2019](https://aclanthology.org/D19-1404/); [Wang et al., 2021](https://arxiv.org/abs/2104.14690); [Sainz and Rigau, 2021)](https://aclanthology.org/2021.gwc-1.6/). Recent research addressed Information Extraction problems with the same idea [(Lyu et al., 2021](https://aclanthology.org/2021.acl-short.42/); [Sainz et al., 2021](https://aclanthology.org/2021.emnlp-main.92/); [Sainz et al., 2022a](), [Sainz et al., 2022b)](https://arxiv.org/abs/2203.13602). The A2T entailment models are first trained with NLI datasets such as MNLI [(Williams et al., 2018)](), SNLI [(Bowman et al., 2015)]() or/and ANLI [(Nie et al., 2020)]() and then fine-tuned to specific tasks that were previously converted to textual entailment format.
For more information please, take a look to the [Ask2Transformers](https://github.com/osainz59/Ask2Transformers) library or the following published papers:
- [Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction (Sainz et al., EMNLP 2021)](https://aclanthology.org/2021.emnlp-main.92/)
- [Textual Entailment for Event Argument Extraction: Zero- and Few-Shot with Multi-Source Learning (Sainz et al., Findings of NAACL-HLT 2022)]()
## About the model
The model name describes the configuration used for training as follows:
<!-- $$\text{HiTZ/A2T\_[pretrained\_model]\_[NLI\_datasets]\_[finetune\_datasets]}$$ -->
<h3 align="center">HiTZ/A2T_[pretrained_model]_[NLI_datasets]_[finetune_datasets]</h3>
- `pretrained_model`: The checkpoint used for initialization. For example: RoBERTa<sub>large</sub>.
- `NLI_datasets`: The NLI datasets used for pivot training.
- `S`: Standford Natural Language Inference (SNLI) dataset.
- `M`: Multi Natural Language Inference (MNLI) dataset.
- `F`: Fever-nli dataset.
- `A`: Adversarial Natural Language Inference (ANLI) dataset.
- `finetune_datasets`: The datasets used for fine tuning the entailment model. Note that for more than 1 dataset the training was performed sequentially. For example: ACE-arg.
Some models like `HiTZ/A2T_RoBERTa_SMFA_ACE-arg` have been trained marking some information between square brackets (`'[['` and `']]'`) like the event trigger span. Make sure you follow the same preprocessing in order to obtain the best results.
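As a concrete illustration of the zero-shot pipeline compatibility mentioned above, here is a minimal sketch. The premise, candidate labels, and hypothesis template are made-up examples (including the `[[` / `]]` trigger marking), not the verbalizations used in the ACE setup.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="HiTZ/A2T_RoBERTa_SMFA_ACE-arg",
)

# Illustrative sentence with the trigger span marked between [[ and ]].
premise = "The army [[ attacked ]] the village at dawn."
candidate_labels = ["attacker", "target", "place"]  # hypothetical argument roles

result = classifier(
    premise,
    candidate_labels,
    hypothesis_template="The {} of the event is mentioned in the text.",
)
print(result["labels"][0], result["scores"][0])
```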
## Cite
If you use this model, consider citing the following publications:
```bibtex
@inproceedings{sainz-etal-2021-label,
title = "Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction",
author = "Sainz, Oscar and
Lopez de Lacalle, Oier and
Labaka, Gorka and
Barrena, Ander and
Agirre, Eneko",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.92",
doi = "10.18653/v1/2021.emnlp-main.92",
pages = "1199--1212",
}
``` |
Philip-Jan/finetuning-sentiment-model-3000-samples | 966415e9322c814a22dd40b18e59f855459d4455 | 2022-07-13T20:44:10.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | Philip-Jan | null | Philip-Jan/finetuning-sentiment-model-3000-samples | 19 | null | transformers | 8,641 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8646864686468646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3328
- Accuracy: 0.8633
- F1: 0.8647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jerryKakooza/language-detection-fine-tuned-on-xlm-roberta-base | b4fa3b7f65a428f5f94f62640ad8bb391deae434 | 2022-05-03T09:31:18.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:common_language",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | jerryKakooza | null | jerryKakooza/language-detection-fine-tuned-on-xlm-roberta-base | 19 | null | transformers | 8,642 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: language-detection-fine-tuned-on-xlm-roberta-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: common_language
type: common_language
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.9760187824920342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# language-detection-fine-tuned-on-xlm-roberta-base
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1642
- Accuracy: 0.9760
## Model description
More information needed
## Intended uses & limitations
More information needed
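As a minimal illustration (no official example is provided here), the model can be queried with the standard text-classification pipeline. The sample sentences are placeholders, and the returned labels are the language codes defined in the checkpoint's config.

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="jerryKakooza/language-detection-fine-tuned-on-xlm-roberta-base",
)

# Placeholder sentences in a few languages.
print(detector(["Bonjour, comment allez-vous ?", "¿Dónde está la estación de tren?"]))
```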
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0725 | 1.0 | 22194 | 0.1642 | 0.9760 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
emilylearning/finetuned_cgp_added_birth_date__test_run_False__p_dataset_100 | fcf11285a2f0ea8472f731dcff339d127c5231a2 | 2022-05-06T07:27:24.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/finetuned_cgp_added_birth_date__test_run_False__p_dataset_100 | 19 | null | transformers | 8,643 | Entry not found |
theojolliffe/bart-large-cnn-finetuned-pubmed | 9ed6b95cc8ade42a8398d70dba8f629e0fc9ca09 | 2022-05-07T10:50:06.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | theojolliffe | null | theojolliffe/bart-large-cnn-finetuned-pubmed | 19 | null | transformers | 8,644 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: pubmed
metrics:
- name: Rouge1
type: rouge
value: 36.3093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0113
- Rouge1: 36.3093
- Rouge2: 14.7358
- Rougel: 22.2752
- Rougelsum: 32.8168
- Gen Len: 137.6193
## Model description
More information needed
## Intended uses & limitations
More information needed
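Until usage details are filled in, here is a minimal summarization sketch. The article variable is a placeholder, and the generation lengths are illustrative defaults rather than the settings used for the evaluation above.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-finetuned-pubmed",
)

article = "..."  # paste a PubMed-style article or other long passage here
print(summarizer(article, max_length=142, min_length=56, truncation=True))
```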
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.1664 | 1.0 | 3748 | 2.0113 | 36.3093 | 14.7358 | 22.2752 | 32.8168 | 137.6193 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
RobertoMCA97/bert-finetuned-inspec | 37995bb66779d7b7929fcd7c4b21ec7baf3ad63e | 2022-05-07T16:35:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:inspec",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | RobertoMCA97 | null | RobertoMCA97/bert-finetuned-inspec | 19 | null | transformers | 8,645 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- inspec
metrics:
- f1
model-index:
- name: bert-finetuned-inspec
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: inspec
type: inspec
args: extraction
metrics:
- name: F1
type: f1
value: 0.30353331752430635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-inspec
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the inspec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3055
- F1: 0.3035
## Model description
More information needed
## Intended uses & limitations
More information needed
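Since usage is not documented, here is a minimal token-classification sketch for extracting keyphrase spans. The example sentence is a placeholder and the tag names come from the checkpoint's config.

```python
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="RobertoMCA97/bert-finetuned-inspec",
    aggregation_strategy="simple",
)

text = "Graph neural networks are increasingly used for molecular property prediction."
for span in extractor(text):  # placeholder abstract-style sentence
    print(span["word"], span["entity_group"], round(span["score"], 3))
```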
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3323 | 1.0 | 125 | 0.2799 | 0.1521 |
| 0.2563 | 2.0 | 250 | 0.2638 | 0.2230 |
| 0.2179 | 3.0 | 375 | 0.2689 | 0.2607 |
| 0.1809 | 4.0 | 500 | 0.2807 | 0.3122 |
| 0.1545 | 5.0 | 625 | 0.3055 | 0.3035 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy | f85c0f418a24db8a0146b739688cc7824b2b29c8 | 2022-07-04T07:27:19.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:eoir_privacy",
"arxiv:2207.00220",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | pile-of-law | null | pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy | 19 | 2 | transformers | 8,646 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eoir_privacy
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-eoir_privacy
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: eoir_privacy
type: eoir_privacy
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.9052835051546392
- name: F1
type: f1
value: 0.8088426527958388
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-eoir_privacy
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the eoir_privacy dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3681
- Accuracy: 0.9053
- F1: 0.8088
## Model description
The model predicts whether names in a text should be masked with pseudonyms. The input should be a paragraph with the names already masked; the model then outputs whether a pseudonym should be used, i.e. whether the EOIR courts would not allow such private/sensitive information to become public unmasked.
## Intended uses & limitations
This is a minimal privacy standard and will likely not work on out-of-distribution data.
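A minimal sketch of how the classifier can be called, following the input convention described above (a paragraph with names already masked). The example paragraph and the `[NAME]` mask format are illustrative assumptions, not a documented preprocessing spec.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy",
)

# Illustrative paragraph with names already masked.
paragraph = (
    "[NAME] testified that they entered the United States in 2014 and "
    "applied for asylum after receiving threats in their home country."
)
print(classifier(paragraph))  # the label indicates whether a pseudonym should be used
```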
## Training and evaluation data
We train on the EOIR Privacy dataset and evaluate further using sensitivity analyses.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 395 | 0.3053 | 0.8789 | 0.7432 |
| 0.3562 | 2.0 | 790 | 0.2857 | 0.8976 | 0.7883 |
| 0.2217 | 3.0 | 1185 | 0.3358 | 0.8905 | 0.7550 |
| 0.1509 | 4.0 | 1580 | 0.3505 | 0.9040 | 0.8077 |
| 0.1509 | 5.0 | 1975 | 0.3681 | 0.9053 | 0.8088 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
### Citation
```
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson*, Peter and Krass*, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
```
|
eslamxm/mt5-base-arabic | 16fa82f4facb900388ce38930cf304ced4c6702c | 2022-06-14T18:08:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"arabic",
"ar",
"Abstractive Summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | eslamxm | null | eslamxm/mt5-base-arabic | 19 | null | transformers | 8,647 | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-arabic
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on arabic subset on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2742
- Rouge-1: 22.86
- Rouge-2: 10.31
- Rouge-l: 20.85
- Gen Len: 19.0
- Bertscore: 71.52
## Model description
More information needed
## Intended uses & limitations
More information needed
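No usage example is included, so here is a minimal sketch with the standard summarization pipeline. The Arabic input is a placeholder and `max_length` is an illustrative choice, not a tuned value.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="eslamxm/mt5-base-arabic")

text = "..."  # an Arabic news article, e.g. from the xlsum test split
print(summarizer(text, max_length=64, truncation=True)[0]["summary_text"])
```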
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.2331 | 1.0 | 1172 | 3.5051 | 18.54 | 6.63 | 16.77 | 19.0 | 70.28 |
| 3.7075 | 2.0 | 2344 | 3.3737 | 19.99 | 7.94 | 18.19 | 19.0 | 70.79 |
| 3.5132 | 3.0 | 3516 | 3.3171 | 20.76 | 8.57 | 18.96 | 19.0 | 70.95 |
| 3.3859 | 4.0 | 4688 | 3.2811 | 21.49 | 8.99 | 19.51 | 19.0 | 71.19 |
| 3.3012 | 5.0 | 5860 | 3.2742 | 21.79 | 9.18 | 19.77 | 19.0 | 71.25 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Jiexing/spider_relation_t5_3b-4160 | 00eec8beeed64edaf1563426c58303017c3c7859 | 2022-05-09T16:42:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Jiexing | null | Jiexing/spider_relation_t5_3b-4160 | 19 | null | transformers | 8,648 | Entry not found |
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_42 | 11c79f55bdfb5b351a2e150d710df7742877a1b8 | 2022-05-10T23:37:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CEBaB | null | CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_42 | 19 | null | transformers | 8,649 | Entry not found |
Xiaoman/NER-CoNLL2003-V3 | 1c6a91de55dd52137ba787de590778fdab365217 | 2022-05-14T18:42:17.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Xiaoman | null | Xiaoman/NER-CoNLL2003-V3 | 19 | null | transformers | 8,650 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: NER-CoNLL2003-V3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-CoNLL2003-V3
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
paust/pko-t5-small | c89bdaf7e9b562fec1b73bffb6d83c608d24657b | 2022-05-21T06:38:51.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ko",
"arxiv:2105.09680",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | paust | null | paust/pko-t5-small | 19 | 1 | transformers | 8,651 | ---
language: ko
license: cc-by-4.0
---
# pko-t5-small
[Source Code](https://github.com/paust-team/pko-t5)
pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained on Korean-only data.
To tokenize Korean, BBPE (which has no OOV tokens) is used instead of sentencepiece, and training applied only unsupervised learning, using T5's span corruption task on Korean data (Namuwiki, Wikipedia, the Modu Corpus, etc.).
When using pko-t5, please fine-tune it on your target task.
## Usage
The model is accessible through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. The model can be used with `T5ForConditionalGeneration` as-is.
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-small')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-small')
input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
print(f"loss={outputs.loss} logits={outputs.logits}")
```
## KLUE evaluation (dev)
| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
| --- | --- |-----------------| --- | --- | --- | --- | --- | --- |
| | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | 75.26/- |
| FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 62.95 | 93.15 | 43.81/46.58 |
| FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 72.94 | 97.28 | 61.53/64.74 |
| FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | 72.26 | 97.60 | 68.01/71.44 |
| MT | pko-t5-small | 85.85 | 79.12/77.81 | 66.8 | 81.53 | 67.93 | 91.38 | 44.97/48.07 |
| MT | pko-t5-base | 86.86 | 87.61/81.42 | 75.46 | 86.85 | 71.85 | 96.32 | 61.95/65.06 |
| MT | pko-t5-large | 87.25 | 91.05/84.58 | 82.16 | 87.63 | **74.78** | **97.33** | **69.18/71.92** |
- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev set reported in the KLUE paper
## License
pko-t5, created by PAUST, is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE). |
RobertoMCA97/bert-finetuned-inspec-3-epochs | 2023362bda4b17d9e2bdce2f984c51c36d79f1d7 | 2022-05-17T17:27:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:inspec",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | RobertoMCA97 | null | RobertoMCA97/bert-finetuned-inspec-3-epochs | 19 | null | transformers | 8,652 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- inspec
metrics:
- f1
- precision
- recall
model-index:
- name: bert-finetuned-inspec-3-epochs
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: inspec
type: inspec
args: extraction
metrics:
- name: F1
type: f1
value: 0.28328008519701814
- name: Precision
type: precision
value: 0.26594090202177295
- name: Recall
type: recall
value: 0.3030379746835443
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-inspec-3-epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the inspec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2728
- F1: 0.2833
- Precision: 0.2659
- Recall: 0.3030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|
| 0.3338 | 1.0 | 125 | 0.2837 | 0.1401 | 0.1510 | 0.1306 |
| 0.2575 | 2.0 | 250 | 0.2658 | 0.2183 | 0.2519 | 0.1927 |
| 0.2259 | 3.0 | 375 | 0.2728 | 0.2833 | 0.2659 | 0.3030 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ibm/qcpg-questions | 4b7b17212dbc6fbc5af2189184bc42d15efb5d47 | 2022-05-18T11:03:01.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | ibm | null | ibm/qcpg-questions | 19 | null | transformers | 8,653 | Details can be found [here](https://github.com/IBM/quality-controlled-paraphrase-generation) |
priyamm/autotrain-KeywordExtraction-882328335 | 4538e66824ddbb94c4486ee0b4041b4d273690e7 | 2022-05-18T20:40:08.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:priyamm/autotrain-data-KeywordExtraction",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | priyamm | null | priyamm/autotrain-KeywordExtraction-882328335 | 19 | null | transformers | 8,654 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- priyamm/autotrain-data-KeywordExtraction
co2_eq_emissions: 0.21373468108000182
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 882328335
- CO2 Emissions (in grams): 0.21373468108000182
## Validation Metrics
- Loss: 0.2641160488128662
- Accuracy: 0.9128
- Precision: 0.9444444444444444
- Recall: 0.8772
- AUC: 0.9709556000000001
- F1: 0.9095810866860223
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/priyamm/autotrain-KeywordExtraction-882328335
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("priyamm/autotrain-KeywordExtraction-882328335", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("priyamm/autotrain-KeywordExtraction-882328335", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
d4niel92/t5-reddit | b819bdcaaab7056584c7249005177f003484b205 | 2022-07-03T07:48:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | d4niel92 | null | d4niel92/t5-reddit | 19 | null | transformers | 8,655 | This T5-small model is fine-tuned on Reddit data.
It supports two subtasks:
- title generation
- tag classification |
emilylearning/cond_ft_birth_date_on_wiki_bio__prcnt_na__test_run_True | 3548ef90be69dd06cbff4864879435270a2c0a56 | 2022-05-25T02:55:08.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | emilylearning | null | emilylearning/cond_ft_birth_date_on_wiki_bio__prcnt_na__test_run_True | 19 | null | transformers | 8,656 | Entry not found |
SalamaThanks/SalamaThanksTransformer_en2fil_v3 | a7543a82b9a251db339ca0c27e97d35043817ad3 | 2022-06-06T11:19:24.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SalamaThanks | null | SalamaThanks/SalamaThanksTransformer_en2fil_v3 | 19 | null | transformers | 8,657 | Entry not found |
plncmm/roberta-clinical-wl-es | c0ddaa11193d5ed9d6b454d43dab8f4cd2092828 | 2022-06-07T23:00:56.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | plncmm | null | plncmm/roberta-clinical-wl-es | 19 | null | transformers | 8,658 | ---
license: apache-2.0
language:
- es
widget:
- text: "Periodontitis <mask> generalizada severa."
- text: "Caries dentinaria <mask>."
- text: "Movilidad aumentada en pza <mask>."
- text: "Pcte con dm en tto con <mask>."
- text: "Pcte con erc en tto con <mask>."
tags:
- generated_from_trainer
model-index:
- name: roberta-clinical-wl-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plncmm/roberta-clinical-wl-es
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the Chilean waiting list dataset.
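## How to use
A minimal sketch with the `fill-mask` pipeline, using one of the widget examples above (the predictions returned depend on the checkpoint):
```python
from transformers import pipeline
# Masked-token prediction over clinical Spanish text
unmasker = pipeline("fill-mask", model="plncmm/roberta-clinical-wl-es")
for pred in unmasker("Periodontitis <mask> generalizada severa."):
    print(pred["token_str"], round(pred["score"], 3))
```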
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
samba/samba-large-bert-fine-tuned | ae44968f00c6e36388aca3825078bf70c3f6299a | 2022-06-13T02:18:56.000Z | [
"pytorch",
"roberta",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | samba | null | samba/samba-large-bert-fine-tuned | 19 | null | transformers | 8,659 | ---
license: apache-2.0
---
|
Aneela/bert-finetuned-ner | f10642157d77675160047c97eb7527563108236f | 2022-06-19T13:57:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Aneela | null | Aneela/bert-finetuned-ner | 19 | null | transformers | 8,660 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9355265333112911
- name: Recall
type: recall
value: 0.9523729384045776
- name: F1
type: f1
value: 0.9438745725961137
- name: Accuracy
type: accuracy
value: 0.986210042974039
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0627
- Precision: 0.9355
- Recall: 0.9524
- F1: 0.9439
- Accuracy: 0.9862
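## How to use
A minimal sketch with the `token-classification` pipeline; the example sentence is illustrative, and the label set follows the CoNLL-2003 scheme (PER, ORG, LOC, MISC):
```python
from transformers import pipeline
# Group sub-word predictions into whole entities
ner = pipeline("token-classification", model="Aneela/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."))
```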
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842 | 1.0 | 1756 | 0.0662 | 0.9195 | 0.9396 | 0.9294 | 0.9839 |
| 0.0384 | 2.0 | 3512 | 0.0581 | 0.9340 | 0.9504 | 0.9421 | 0.9862 |
| 0.0182 | 3.0 | 5268 | 0.0627 | 0.9355 | 0.9524 | 0.9439 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
chkla/parlbert-topic-german | 6db3aeb6bb3122f8e117f266ea6c6bfe7be3a44d | 2022-06-20T09:45:18.000Z | [
"pytorch",
"bert",
"text-classification",
"german",
"transformers"
] | text-classification | false | chkla | null | chkla/parlbert-topic-german | 19 | null | transformers | 8,661 | ---
language: german
---
### Welcome to ParlBERT-Topic-German!
🏷 **Model description**
This model was trained on \~10k manually annotated interpellations (📚 [Breunig/ Schnatterer 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198835332.001.0001/oso-9780198835332)) with topics from the [Comparative Agendas Project](https://www.comparativeagendas.net/datasets_codebooks) to classify text into one of twenty labels (annotation codebook).
_Note: "Interpellation is a formal request of a parliament to the respective government."([Wikipedia](https://en.wikipedia.org/wiki/Interpellation_(politics)))_
🗃 **Dataset**
| party | speeches | tokens |
|----|----|----|
| CDU/CSU | 7,635 | 4,862,654 |
| SPD | 5,321 | 3,158,315 |
| AfD | 3,465 | 1,844,707 |
| FDP | 3,067 | 1,593,108 |
| The Greens | 2,866 | 1,522,305 |
| The Left | 2,671 | 1,394,089 |
| cross-bencher | 200 | 86,170 |
🏃🏼♂️**Model training**
**ParlBERT-Topic-German** was fine-tuned on a domain adapted model (GermanBERT fine-tuned on [DeuParl](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/2889?show=full)) for topic modeling with an interpellations dataset (📚 [Breunig/ Schnatterer 2019](https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198835332.001.0001/oso-9780198835332)) from the [Comparative Agendas Project](https://www.comparativeagendas.net/datasets_codebooks).
🤖 **Use**
```python
from transformers import pipeline
pipeline_classification_topics = pipeline("text-classification", model="chkla/parlbert-topic-german", tokenizer="bert-base-german-cased", return_all_scores=False)
text = "Sachgebiet Ausschließliche Gesetzgebungskompetenz des Bundes über die Zusammenarbeit des Bundes und der Länder zum Schutze der freiheitlichen demokratischen Grundordnung, des Bestandes und der Sicherheit des Bundes oder eines Landes Wir fragen die Bundesregierung"
pipeline_classification_topics(text) # Government
```
📊 **Evaluation**
The model was evaluated on an evaluation set (20%):
| Label | F1 | support |
|----|----|----|
| International | 80.0 | 1,126 |
| Defense | 85.0 | 1,099 |
| Government | 71.3 | 989 |
| Civil Rights | 76.5 | 978 |
| Environment | 76.6 | 845 |
| Transportation | 86.0 | 800 |
| Law & Crime | 67.1 | 492 |
| Energy | 78.6 | 424 |
| Health | 78.2 | 418 |
| Domestic Com. | 64.4 | 382 |
| Immigration | 81.0 | 376 |
| Labor | 69.1 | 344 |
| Macroeconom. | 62.8 | 339 |
| Agriculture | 76.3 | 292 |
| Social Welfare | 49.2 | 253 |
| Technology | 63.0 | 252 |
| Education | 71.6 | 183 |
| Housing | 79.6 | 178 |
| Foreign Trade | 61.5 | 139 |
| Culture | 54.6 | 69 |
| Public Lands | 45.4 | 55 |
⚠️ **Limitations**
Models are often highly topic dependent. Therefore, the model may perform less well on different topics and text types not included in the training set.
👥 **Cite**
```
@article{klamm2022frameast,
title={FrameASt: A Framework for Second-level Agenda Setting in Parliamentary Debates through the Lense of Comparative Agenda Topics},
author={Klamm, Christopher and Rehbein, Ines and Ponzetto, Simone},
journal={ParlaCLARIN III at LREC2022},
year={2022}
}
```
🐦 Twitter: [@chklamm](http://twitter.com/chklamm) |
bousejin/distilbert-base-uncased-finetuned-emotion | 84d53d10aace685a7571d97cbed6b7e34f964801 | 2022-07-13T12:53:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | bousejin | null | bousejin/distilbert-base-uncased-finetuned-emotion | 19 | null | transformers | 8,662 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.925169929474641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.925
- F1: 0.9252
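## How to use
A minimal sketch with the `text-classification` pipeline; the input is illustrative, and the returned label names depend on how the label mapping was saved with the checkpoint (they may appear as `LABEL_0`–`LABEL_5` rather than emotion names):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="bousejin/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see my family this weekend!"))
```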
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8419 | 1.0 | 250 | 0.3236 | 0.9025 | 0.8999 |
| 0.258 | 2.0 | 500 | 0.2202 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
abhishek/autotrain_fashion_mnist_vit_base | 7aa1c5cbc6c320e84d15026792d600ed28dd23ac | 2022-06-23T13:48:56.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:abhishek/autotrain-data-vision_877913e77fb94b7abd4dafc5ebf830b0",
"dataset:fashion_mnist",
"transformers",
"autotrain",
"model-index",
"co2_eq_emissions"
] | image-classification | false | abhishek | null | abhishek/autotrain_fashion_mnist_vit_base | 19 | null | transformers | 8,663 | ---
tags: autotrain
datasets:
- abhishek/autotrain-data-vision_877913e77fb94b7abd4dafc5ebf830b0
- fashion_mnist
co2_eq_emissions: 0.2438639401641305
model-index:
- name: autotrain_fashion_mnist_vit_base
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: fashion_mnist
type: fashion_mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9473
- task:
type: image-classification
name: Image Classification
dataset:
name: fashion_mnist
type: fashion_mnist
config: fashion_mnist
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9431
verified: true
- name: Precision Macro
type: precision
value: 0.9435374485262068
verified: true
- name: Precision Micro
type: precision
value: 0.9431
verified: true
- name: Precision Weighted
type: precision
value: 0.9435374485262069
verified: true
- name: Recall Macro
type: recall
value: 0.9430999999999999
verified: true
- name: Recall Micro
type: recall
value: 0.9431
verified: true
- name: Recall Weighted
type: recall
value: 0.9431
verified: true
- name: F1 Macro
type: f1
value: 0.9431357840300738
verified: true
- name: F1 Micro
type: f1
value: 0.9431
verified: true
- name: F1 Weighted
type: f1
value: 0.9431357840300738
verified: true
- name: loss
type: loss
value: 0.17352284491062164
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 7024732
- CO2 Emissions (in grams): 0.2438639401641305
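## Usage
A minimal sketch with the `image-classification` pipeline; the file path is a placeholder for any image of a clothing item (the Fashion-MNIST label set):
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="abhishek/autotrain_fashion_mnist_vit_base")
# "sneaker.png" is a placeholder path; the pipeline also accepts PIL images and URLs
print(classifier("sneaker.png"))
```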
## Validation Metrics
- Loss: 0.16775867342948914
- Accuracy: 0.9473333333333334
- Macro F1: 0.9473921270228505
- Micro F1: 0.9473333333333334
- Weighted F1: 0.9473921270228505
- Macro Precision: 0.9478705813419325
- Micro Precision: 0.9473333333333334
- Weighted Precision: 0.9478705813419323
- Macro Recall: 0.9473333333333332
- Micro Recall: 0.9473333333333334
- Weighted Recall: 0.9473333333333334 |
mgfrantz/deberta_v3_finetuned_predicting_effective_arguments | 18cc8c463d8cdb69c925bd574c2c31f80b66fdb9 | 2022-07-26T23:17:01.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers"
] | text-classification | false | mgfrantz | null | mgfrantz/deberta_v3_finetuned_predicting_effective_arguments | 19 | null | transformers | 8,664 | Entry not found |
annahaz/xlm-roberta-base-finetuned-misogyny-en-it-hi-beng | 3139e31f1d45fb39e9a5b9eb70dce82c4eb7cad1 | 2022-06-30T20:47:09.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | annahaz | null | annahaz/xlm-roberta-base-finetuned-misogyny-en-it-hi-beng | 19 | null | transformers | 8,665 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-en-it-hi-beng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-en-it-hi-beng
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0140
- Accuracy: 0.9970
- F1: 0.9969
- Precision: 0.9937
- Recall: 1.0
- Mae: 0.0030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3131 | 1.0 | 1759 | 0.4655 | 0.7820 | 0.7682 | 0.7855 | 0.7516 | 0.2180 |
| 0.2644 | 2.0 | 3518 | 0.3231 | 0.8619 | 0.8665 | 0.8091 | 0.9326 | 0.1381 |
| 0.2408 | 3.0 | 5277 | 0.3515 | 0.8801 | 0.8877 | 0.8071 | 0.9863 | 0.1199 |
| 0.1927 | 4.0 | 7036 | 0.1428 | 0.9514 | 0.9512 | 0.9194 | 0.9853 | 0.0486 |
| 0.1333 | 5.0 | 8795 | 0.1186 | 0.9712 | 0.9707 | 0.9478 | 0.9947 | 0.0288 |
| 0.1163 | 6.0 | 10554 | 0.0546 | 0.9879 | 0.9875 | 0.9803 | 0.9947 | 0.0121 |
| 0.0854 | 7.0 | 12313 | 0.0412 | 0.9899 | 0.9896 | 0.9804 | 0.9989 | 0.0101 |
| 0.086 | 8.0 | 14072 | 0.0252 | 0.9949 | 0.9948 | 0.9896 | 1.0 | 0.0051 |
| 0.0395 | 9.0 | 15831 | 0.0179 | 0.9965 | 0.9963 | 0.9927 | 1.0 | 0.0035 |
| 0.0343 | 10.0 | 17590 | 0.0140 | 0.9970 | 0.9969 | 0.9937 | 1.0 | 0.0030 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anahitapld/robera-base-dbd | c1f6bf7c5dff38150dffba3ee10c3edd0976cd53 | 2022-06-29T08:53:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | anahitapld | null | anahitapld/robera-base-dbd | 19 | null | transformers | 8,666 | ---
license: apache-2.0
---
|
JHart96/finetuning-sentiment-model-3000-samples | 69ce08ed13517dee4611d53707c0d625feea4201 | 2022-06-29T18:20:13.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | JHart96 | null | JHart96/finetuning-sentiment-model-3000-samples | 19 | null | transformers | 8,667 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8627450980392156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3300
- Accuracy: 0.86
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ychenNLP/arabic-ner-ace | 2ea8baa63841c8f8bec9a1d8204589b951cc7455 | 2022-07-12T20:02:24.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"en",
"dataset:ACE2005",
"transformers",
"BERT",
"token-classification",
"sequence-tagger-model",
"license:mit"
] | text-classification | false | ychenNLP | null | ychenNLP/arabic-ner-ace | 19 | 1 | transformers | 8,668 | ---
tags:
- BERT
- token-classification
- sequence-tagger-model
language:
- ar
- en
license: mit
datasets:
- ACE2005
---
# Arabic NER Model
- [Github repo](https://github.com/edchengg/GigaBERT)
- NER BIO tagging model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English).
- ACE2005 Training data: English + Arabic
- [NER tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-entities-guidelines-v6.6.pdf) including: PER, VEH, GPE, WEA, ORG, LOC, FAC
## Hyperparameters
- learning_rate=2e-5
- num_train_epochs=10
- weight_decay=0.01
## ACE2005 Evaluation results (F1)
| Language | Arabic | English |
|:----:|:-----------:|:----:|
| | 89.4 | 88.8 |
## How to use
```python
>>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
>>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace")
>>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True)
>>> output = ner_pip('Protests break out across the US after Supreme Court overturns.')
>>> print(output)
[{'entity_group': 'GPE', 'score': 0.9979881, 'word': 'us', 'start': 30, 'end': 32}, {'entity_group': 'ORG', 'score': 0.99898684, 'word': 'supreme court', 'start': 39, 'end': 52}]
>>> output = ner_pip('قال وزير العدل التركي بكير بوزداغ إن أنقرة تريد 12 مشتبهاً بهم من فنلندا و 21 من السويد')
>>> print(output)
[{'entity_group': 'PER', 'score': 0.9996214, 'word': 'وزير', 'start': 4, 'end': 8}, {'entity_group': 'ORG', 'score': 0.9952383, 'word': 'العدل', 'start': 9, 'end': 14}, {'entity_group': 'GPE', 'score': 0.9996675, 'word': 'التركي', 'start': 15, 'end': 21}, {'entity_group': 'PER', 'score': 0.9978992, 'word': 'بكير بوزداغ', 'start': 22, 'end': 33}, {'entity_group': 'GPE', 'score': 0.9997154, 'word': 'انقرة', 'start': 37, 'end': 42}, {'entity_group': 'PER', 'score': 0.9946885, 'word': 'مشتبها بهم', 'start': 51, 'end': 62}, {'entity_group': 'GPE', 'score': 0.99967396, 'word': 'فنلندا', 'start': 66, 'end': 72}, {'entity_group': 'PER', 'score': 0.99694425, 'word': '21', 'start': 75, 'end': 77}, {'entity_group': 'GPE', 'score': 0.99963355, 'word': 'السويد', 'start': 81, 'end': 87}]
```
### BibTeX entry and citation info
```bibtex
@inproceedings{lan2020gigabert,
author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan},
title = {Giga{BERT}: Zero-shot Transfer Learning from {E}nglish to {A}rabic},
booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)},
year = {2020}
}
```
|
projecte-aina/roberta-base-ca-v2-cased-pos | a7a8a3a5ff15e239233ed1543cd6f52f24cd2652 | 2022-07-25T06:58:29.000Z | [
"pytorch",
"roberta",
"token-classification",
"ca",
"dataset:universal_dependencies",
"arxiv:1907.11692",
"transformers",
"catalan",
"part of speech tagging",
"pos",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-v2-cased-pos | 19 | null | transformers | 8,669 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "part of speech tagging"
- "pos"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "universal_dependencies"
metrics:
- f1
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-ca-v2-cased-pos
results:
- task:
type: token-classification
dataset:
type: universal_dependencies
name: Ancora-ca-POS
metrics:
- name: F1
type: f1
value: 0.9909
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Part-of-speech-tagging (POS)
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and Metrics](#variable-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)
## Model description
The **roberta-base-ca-v2-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended Uses and Limitations
**roberta-base-ca-v2-cased-pos** model can be used to Part-of-speech-tagging (POS) a text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to Use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-v2-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."
pos_results = nlp(example)
pprint(pos_results)
```
## Training
### Training data
We used the POS dataset in Catalan from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
### Training Procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and Metrics
This model was fine-tuned maximizing the F1 score.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-pos (F1) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-pos |99.09 |
| roberta-base-ca-cased-pos | **99.10** |
| mBERT | 98.98 |
| XLM-RoBERTa | 99.03 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Licensing Information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
## Contributions
[N/A]
|
Skelebor/book-descriptions | f94925b929bfd285ee4292bd1d3e517fbbc7449e | 2022-06-30T17:22:25.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | Skelebor | null | Skelebor/book-descriptions | 19 | null | transformers | 8,670 | Entry not found |
emilys/twitter-roberta-base-WNUT | e1907bac65d89100aa6c2c1e4b86cec6cbbfd9e6 | 2022-07-02T01:11:49.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:wnut_17",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | emilys | null | emilys/twitter-roberta-base-WNUT | 19 | null | transformers | 8,671 | ---
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter-roberta-base-WNUT
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.7045454545454546
- name: Recall
type: recall
value: 0.6303827751196173
- name: F1
type: f1
value: 0.6654040404040403
- name: Accuracy
type: accuracy
value: 0.9639611008707811
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-WNUT
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1938
- Precision: 0.7045
- Recall: 0.6304
- F1: 0.6654
- Accuracy: 0.9640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.46 | 25 | 0.3912 | 0.0 | 0.0 | 0.0 | 0.9205 |
| No log | 0.93 | 50 | 0.2847 | 0.25 | 0.0024 | 0.0047 | 0.9209 |
| No log | 1.39 | 75 | 0.2449 | 0.5451 | 0.3469 | 0.4240 | 0.9426 |
| No log | 1.85 | 100 | 0.1946 | 0.6517 | 0.4856 | 0.5565 | 0.9492 |
| No log | 2.31 | 125 | 0.1851 | 0.6921 | 0.5646 | 0.6219 | 0.9581 |
| No log | 2.78 | 150 | 0.1672 | 0.6867 | 0.5873 | 0.6331 | 0.9594 |
| No log | 3.24 | 175 | 0.1675 | 0.6787 | 0.5837 | 0.6277 | 0.9615 |
| No log | 3.7 | 200 | 0.1644 | 0.6765 | 0.6328 | 0.6539 | 0.9638 |
| No log | 4.17 | 225 | 0.1672 | 0.6997 | 0.6495 | 0.6737 | 0.9640 |
| No log | 4.63 | 250 | 0.1652 | 0.6915 | 0.6435 | 0.6667 | 0.9649 |
| No log | 5.09 | 275 | 0.1882 | 0.7067 | 0.6053 | 0.6521 | 0.9629 |
| No log | 5.56 | 300 | 0.1783 | 0.7128 | 0.6352 | 0.6717 | 0.9645 |
| No log | 6.02 | 325 | 0.1813 | 0.7011 | 0.6172 | 0.6565 | 0.9639 |
| No log | 6.48 | 350 | 0.1804 | 0.7139 | 0.6447 | 0.6776 | 0.9647 |
| No log | 6.94 | 375 | 0.1902 | 0.7218 | 0.6268 | 0.6709 | 0.9641 |
| No log | 7.41 | 400 | 0.1883 | 0.7106 | 0.6316 | 0.6688 | 0.9641 |
| No log | 7.87 | 425 | 0.1862 | 0.7067 | 0.6340 | 0.6683 | 0.9643 |
| No log | 8.33 | 450 | 0.1882 | 0.7053 | 0.6328 | 0.6671 | 0.9639 |
| No log | 8.8 | 475 | 0.1919 | 0.7055 | 0.6304 | 0.6658 | 0.9638 |
| 0.1175 | 9.26 | 500 | 0.1938 | 0.7045 | 0.6304 | 0.6654 | 0.9640 |
| 0.1175 | 9.72 | 525 | 0.1880 | 0.7025 | 0.6411 | 0.6704 | 0.9646 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
haritzpuerto/distilroberta-squad_1.1 | 583aaaed1bf76cf0f31b3086c8028c316bc29e78 | 2022-07-03T21:51:19.000Z | [
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:squad",
"transformers",
"QA",
"Question Answering",
"SQuAD",
"license:mit",
"model-index",
"autotrain_compatible"
] | question-answering | false | haritzpuerto | null | haritzpuerto/distilroberta-squad_1.1 | 19 | null | transformers | 8,672 | ---
language:
- en
tags:
- QA
- Question Answering
- SQuAD
license: "mit"
datasets:
- squad
metrics:
- squad
model-index:
- name: distilroberta-base
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: Question Answering # Optional. Example: Speech Recognition
dataset:
type: squad # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: SQuAD # Required. A pretty name for the dataset. Example: Common Voice (French)
split: validation # Optional. Example: test
metrics:
- type: squad # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 76.37653736991486 # Required. Example: 20.90
name: SQuAD EM # Optional. Example: Test WER
config: exact_match # Optional. The name of the metric configuration used in `load_metric()`. Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`. See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
- type: squad # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 84.5528918750732 # Required. Example: 20.90
name: SQuAD F1 # Optional. Example: Test WER
config: F1
---
distilroberta-base fine-tuned on SQuAD (https://huggingface.co/datasets/squad)
Hyperparameters:
- epochs: 1
- lr: 1e-5
- train batch size: 16
- optimizer: adamW
- lr_scheduler: linear
- num warming steps: 0
- max_length: 512
Results on the dev set:
- 'exact_match': 76.37653736991486
- 'f1': 84.5528918750732
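A minimal usage sketch with the `question-answering` pipeline (the question/context pair is illustrative):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="haritzpuerto/distilroberta-squad_1.1")
result = qa(question="What dataset was the model fine-tuned on?",
            context="The model was fine-tuned on SQuAD 1.1 for one epoch using a Colab GPU.")
print(result["answer"], result["score"])
```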
It took 1h 20 min to train on Colab. |
Morfeo/it5-base-news-summarization-finetuned-it-sum | 07501c4a7c2de4725aa7c2731e9212ed959db873 | 2022-07-04T22:02:07.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | summarization | false | Morfeo | null | Morfeo/it5-base-news-summarization-finetuned-it-sum | 19 | null | transformers | 8,673 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: it5-base-news-summarization-finetuned-it-sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# it5-base-news-summarization-finetuned-it-sum
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4506
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 12.4005 | 1.0 | 1000 | 12.3517 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.2597 | 2.0 | 2000 | 12.1695 | 0.0 | 0.0 | 0.0 | 0.0 |
| 12.0478 | 3.0 | 3000 | 11.9578 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11.8364 | 4.0 | 4000 | 11.7834 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11.6736 | 5.0 | 5000 | 11.6447 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11.5498 | 6.0 | 6000 | 11.5447 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11.4664 | 7.0 | 7000 | 11.4797 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11.4209 | 8.0 | 8000 | 11.4506 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BlazeLlama/piwpaw_medium | 1583a63fd1f8c353016131be8c7ee11b95126d2e | 2022-07-13T13:21:54.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BlazeLlama | null | BlazeLlama/piwpaw_medium | 19 | null | transformers | 8,674 | Entry not found |
tanapatentlm/patentdeberta_large_spec_512_pwi | a0954d01996eb5fca536f5a438c2e687aa78475d | 2022-07-07T04:46:35.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tanapatentlm | null | tanapatentlm/patentdeberta_large_spec_512_pwi | 19 | null | transformers | 8,675 | Entry not found |
Aktsvigun/bart-base_aeslc_3198548 | 7996a33124518469016c21e99deeb49829f74b39 | 2022-07-07T15:29:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_3198548 | 19 | null | transformers | 8,676 | Entry not found |
andy-0v0/fancy-animales | f3130cd17610217f8897c6edb37b58f9b9f0603d | 2022-07-12T15:30:18.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | andy-0v0 | null | andy-0v0/fancy-animales | 19 | null | transformers | 8,677 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fancy-animales
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9464285969734192
---
# fancy-animales
Just for fun and to test the template!
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chow chow

#### panda

#### penguin

#### sloth

#### wombat
 |
KoichiYasuoka/bert-ancient-chinese-base-ud-head | 92e420b6f6e062ef0205727460510dcdb79571fa | 2022-07-20T03:51:30.000Z | [
"pytorch",
"bert",
"question-answering",
"lzh",
"dataset:universal_dependencies",
"transformers",
"classical chinese",
"literary chinese",
"ancient chinese",
"dependency-parsing",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/bert-ancient-chinese-base-ud-head | 19 | null | transformers | 8,678 | ---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "question-answering"
widget:
- text: "穴"
context: "不入虎穴不得虎子"
- text: "子"
context: "不入虎穴不得虎子"
- text: "不"
context: "[MASK]入虎穴不得虎子"
---
# bert-ancient-chinese-base-ud-head
## Model Description
This is a BERT model pre-trained on Classical Chinese texts for dependency-parsing (head-detection on Universal Dependencies) as question-answering, derived from [bert-ancient-chinese](https://huggingface.co/Jihuai/bert-ancient-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model)
print(qap(question="穴",context="不入虎穴不得虎子"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/bert-ancient-chinese-base-ud-head")
print(nlp("不入虎穴不得虎子"))
```
|
jonatasgrosman/exp_w2v2t_ja_wavlm_s729 | 34b1e881d63b949842c3f5292860b708ad1b48ca | 2022-07-08T16:56:04.000Z | [
"pytorch",
"wavlm",
"automatic-speech-recognition",
"ja",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/exp_w2v2t_ja_wavlm_s729 | 19 | 1 | transformers | 8,679 | ---
language:
- ja
license: apache-2.0
tags:
- automatic-speech-recognition
- ja
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_ja_wavlm_s729
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
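A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the file name is a placeholder for a 16kHz Japanese recording, and decoding audio files this way assumes ffmpeg is installed):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_ja_wavlm_s729")
# "sample_ja.wav" is a placeholder path to a 16kHz mono recording
print(asr("sample_ja.wav")["text"])
```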
|
Dithya/Text_simplify | 46c6ddf32748ebc93783ee8f26bc39fb719c48d9 | 2022-07-10T14:56:10.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Dithya | null | Dithya/Text_simplify | 19 | null | transformers | 8,680 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5849
- Rouge1: 90.2463
- Rouge2: 83.7826
- Rougel: 89.3909
- Rougelsum: 89.6832
- Gen Len: 41.5878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
heoji/koelectra_senti_1 | cac8e255516046b39b8a0e656d05107b8f368abb | 2022-07-11T06:27:28.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | heoji | null | heoji/koelectra_senti_1 | 19 | null | transformers | 8,681 | Entry not found |
Shaier/medqa_fine_tuned_generic_bert | 85b80c88a58f5039fced4993e02bafbc1091133b | 2022-07-12T20:33:17.000Z | [
"pytorch",
"bert",
"multiple-choice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | Shaier | null | Shaier/medqa_fine_tuned_generic_bert | 19 | null | transformers | 8,682 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned_generic_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned_generic_bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Accuracy: 0.2869
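## How to use
A minimal multiple-choice sketch; the question and options are illustrative (not taken from MedQA), and since the reported accuracy is low the prediction should be read as a demonstration of the input format rather than a reliable answer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice
tokenizer = AutoTokenizer.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")
model = AutoModelForMultipleChoice.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")
question = "Which vitamin deficiency causes scurvy?"  # illustrative item, not from MedQA
choices = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]
# Pair the question with every candidate answer: tensors of shape (1, num_choices, seq_len)
encoding = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```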
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3851 | 0.2594 |
| 1.3896 | 2.0 | 636 | 1.3805 | 0.2807 |
| 1.3896 | 3.0 | 954 | 1.3852 | 0.2948 |
| 1.3629 | 4.0 | 1272 | 1.3996 | 0.2980 |
| 1.3068 | 5.0 | 1590 | 1.4239 | 0.2869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
casasdorjunior/t5-small-finetuned-cc-news-es-titles | ed34c8ab22668164d6f00dc2d7e8e9ac8489debc | 2022-07-13T08:52:55.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cc-news-es-titles",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | casasdorjunior | null | casasdorjunior/t5-small-finetuned-cc-news-es-titles | 19 | null | transformers | 8,683 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cc-news-es-titles
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cc-news-es-titles
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cc-news-es-titles
type: cc-news-es-titles
args: default
metrics:
- name: Rouge1
type: rouge
value: 16.701
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cc-news-es-titles
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cc-news-es-titles dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6383
- Rouge1: 16.701
- Rouge2: 4.1265
- Rougel: 14.8175
- Rougelsum: 14.8193
- Gen Len: 18.9159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 2.8439 | 1.0 | 23133 | 2.6383 | 16.701 | 4.1265 | 14.8175 | 14.8193 | 18.9159 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KeLiu/QETRA_JavaScript | 67060f63684f778d913985213046cfc1e6fb2f9b | 2022-07-13T14:33:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KeLiu | null | KeLiu/QETRA_JavaScript | 19 | null | transformers | 8,684 | Entry not found |
nakamura196/trocr-small-hi | c29f5f6f73bcea19c20755c04546b42bd676d1e7 | 2022-07-15T19:35:38.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
] | null | false | nakamura196 | null | nakamura196/trocr-small-hi | 19 | null | transformers | 8,685 | Entry not found |
Team-PIXEL/pixel-base-finetuned-stsb | 42e8f936183c00950a09d09cfeb5bc23cf719332 | 2022-07-15T03:04:45.000Z | [
"pytorch",
"pixel",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Team-PIXEL | null | Team-PIXEL/pixel-base-finetuned-stsb | 19 | null | transformers | 8,686 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-stsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-stsb
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE STSB dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 15000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
sagawa/t5-demo | 80e4a27faee00724c42b4d2c1c31287c1a08bc58 | 2022-07-16T10:06:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"chemistry",
"autotrain_compatible"
] | text2text-generation | false | sagawa | null | sagawa/t5-demo | 19 | null | transformers | 8,687 | ---
tags:
- chemistry
---
# Chemt5: t5 trained on ZINC data.
This is a demo of Chemt5. |
ipvikas/distilbert-base-uncased-finetuned-imdb | d1477d1bb61e6b02a5864e16a71de8b59f33efe6 | 2022-07-16T11:06:52.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ipvikas | null | ipvikas/distilbert-base-uncased-finetuned-imdb | 19 | null | transformers | 8,688 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
pnr-svc/ConvBert-Sentiment-Analysis-Turkish | 44a653e655d3190de204856d36964c42f09a106d | 2022-07-20T21:37:09.000Z | [
"pytorch",
"convbert",
"text-classification",
"dataset:pnr-svc/Turkish-Multiclass-Dataset",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | pnr-svc | null | pnr-svc/ConvBert-Sentiment-Analysis-Turkish | 19 | null | transformers | 8,689 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pnr-svc/Turkish-Multiclass-Dataset
metrics:
- accuracy
model-index:
- name: multi-class-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: pnr-svc/Turkish-Multiclass-Dataset
type: pnr-svc/Turkish-Multiclass-Dataset
args: TurkishMulticlassDataset
metrics:
- name: Accuracy
type: accuracy
value: 0.859
- task:
type: text-classification
name: Text Classification
dataset:
name: pnr-svc/Turkish-Multiclass-Dataset
type: pnr-svc/Turkish-Multiclass-Dataset
config: TurkishMulticlassDataset
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.859
verified: true
- name: loss
type: loss
value: 0.4957726299762726
verified: true
---
# multi-class-classification
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-cased](https://huggingface.co/dbmdz/convbert-base-turkish-cased) on the pnr-svc/Turkish-Multiclass-Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.495773
- Accuracy: 0.859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 16
- eval_batch_size: 16
- num_epochs: 6
### Training results
| Training Loss | Epoch | Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.495773 | 6.0 | 0.4957 | 0.859 |
|
rsuwaileh/IDRISI-LMR-EN-timebased-typeless | 517eb71a8e48f9fc962c437db6f912256c717ae1 | 2022-07-20T14:59:47.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | rsuwaileh | null | rsuwaileh/IDRISI-LMR-EN-timebased-typeless | 19 | null | transformers | 8,690 | ---
license: apache-2.0
---
This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in the text without predicting their location types.
The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-less` LMR mode and using the `Time-based` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-timebased-bilou/). All Location types in the data were normalized to the `LOC` tag. More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models).
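A minimal tagging sketch with the `token-classification` pipeline; the input tweet is illustrative, and all detected location mentions are returned under the single `LOC` tag:
```python
from transformers import pipeline
tagger = pipeline("token-classification",
                  model="rsuwaileh/IDRISI-LMR-EN-timebased-typeless",
                  aggregation_strategy="simple")
print(tagger("Flooding reported in Houston and across Harris County after the storm."))
```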
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
* Arabic models are also available:
- [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/)
To cite the models:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
pritoms/opt-350m-finetuned-stack | bcab1dd7e2ba9d897977460f0f24975402d71468 | 2022-07-18T11:14:18.000Z | [
"pytorch",
"tensorboard",
"opt",
"text-generation",
"transformers",
"generated_from_trainer",
"license:other",
"model-index"
] | text-generation | false | pritoms | null | pritoms/opt-350m-finetuned-stack | 19 | null | transformers | 8,691 | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-350m-finetuned-stack
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-finetuned-stack
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
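## How to use
A minimal sketch with the `text-generation` pipeline; the prompt and sampling settings are illustrative, since the fine-tuning data is not documented in this card:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="pritoms/opt-350m-finetuned-stack")
# Illustrative prompt and arbitrary sampling settings
print(generator("The quickest way to get started is", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```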
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ClassCat/roberta-small-greek | f311409a1f248c3b18a6b73b9de744fc35bedfe1 | 2022-07-21T11:01:55.000Z | [
"pytorch",
"roberta",
"fill-mask",
"el",
"dataset:cc100",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | ClassCat | null | ClassCat/roberta-small-greek | 19 | 1 | transformers | 8,692 | ---
language: el
license: cc-by-sa-4.0
datasets:
- cc100
- oscar
- wikipedia
widget:
- text: "Δεν την έχω <mask> ποτέ."
- text: "Έχει πολύ καιρό που δεν έχουμε <mask>."
- text: "Ευχαριστώ για το <mask> σου."
- text: "Αυτό είναι <mask>."
- text: "Ανοιξα <mask>."
- text: "Ευχαριστώ για <mask>."
- text: "Έχει πολύ καιρό που δεν <mask>."
---
## RoBERTa Greek small model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses approximately half as many parameters as the RoBERTa base model.
### Tokenizer
The model uses a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* Subset of [CC-100/el](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
* Subset of [oscar](https://huggingface.co/datasets/oscar)
* [wiki40b/el](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bel) (Greek Wikipedia)
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-small-greek')
unmasker("Έχει πολύ καιρό που δεν <mask>.")
``` |
nyorain/xtremedistil-l12-h384-uncased-natural-questions | ed706ec743a2c9607d9eb0b661f88c63a3f1b9b9 | 2022-07-22T21:31:38.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | nyorain | null | nyorain/xtremedistil-l12-h384-uncased-natural-questions | 19 | null | transformers | 8,693 | ---
language: en
tags:
- question-answering
license: mit
---
This is an xtremedistil-l12-h384-uncased model trained on a subset of the "Natural Questions Short" dataset.
It was built for the "Deep Learning for Natural Language Processing" course at TU Darmstadt (Group 69).
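A minimal usage sketch follows; the question and context are illustrative assumptions, not taken from the course data.

```python
# Illustrative sketch: extractive question answering with this checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="nyorain/xtremedistil-l12-h384-uncased-natural-questions",
)

result = qa(
    question="Where is TU Darmstadt located?",
    context="TU Darmstadt is a research university in the city of Darmstadt, Germany.",
)
print(result["answer"], round(result["score"], 3))
```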
SQuAD metrics:
- 'exact_match': 40.217
- 'f1': 62.3873 |
huggingtweets/vgdunkey-vgdunkeybot | 409603618ac868cf2b0006e7b0d3cca7e841a283 | 2022-07-23T21:18:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/vgdunkey-vgdunkeybot | 19 | null | transformers | 8,694 | ---
language: en
thumbnail: http://www.huggingtweets.com/vgdunkey-vgdunkeybot/1658611112335/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/727879199931944961/vkkeC6d2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">dunkey & dunkey bot</div>
<div style="text-align: center; font-size: 14px;">@vgdunkey-vgdunkeybot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from dunkey & dunkey bot.
| Data | dunkey | dunkey bot |
| --- | --- | --- |
| Tweets downloaded | 1282 | 3200 |
| Retweets | 147 | 0 |
| Short tweets | 327 | 526 |
| Tweets kept | 808 | 2674 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/208r9p27/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vgdunkey-vgdunkeybot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m3it0jfs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m3it0jfs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vgdunkey-vgdunkeybot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
onon214/transformer-NLP | 6812c81f88d22b05b5eebfb3332b5589bbfaeb2b | 2022-07-24T09:41:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | onon214 | null | onon214/transformer-NLP | 19 | null | transformers | 8,695 | ---
tags:
- generated_from_trainer
model-index:
- name: transformer-NLP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transformer-NLP
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4503
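The tags indicate a BERT-style fill-mask model, so a hedged usage sketch would look like the following. The `[MASK]` token and the example sentence are assumptions, and given the high final training loss the predictions may be rough.

```python
# Hedged sketch: assumes a BERT-style [MASK] token, as suggested by the model tags.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="onon214/transformer-NLP")

for pred in unmasker("The weather today is [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```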
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.8223 | 1.0 | 21 | 9.4635 |
| 9.4003 | 2.0 | 42 | 9.2399 |
| 9.1754 | 3.0 | 63 | 9.0618 |
| 8.9665 | 4.0 | 84 | 8.8478 |
| 8.8297 | 5.0 | 105 | 8.7369 |
| 8.6993 | 6.0 | 126 | 8.6474 |
| 8.6372 | 7.0 | 147 | 8.5848 |
| 8.5375 | 8.0 | 168 | 8.4988 |
| 8.5175 | 9.0 | 189 | 8.4400 |
| 8.4955 | 10.0 | 210 | 8.4503 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 | 9639384a87c117aaf66717d2465d7e8a1be74b1c | 2022-07-28T11:28:05.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9 | 19 | null | transformers | 8,696 | ---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 35.9969
verified: true
- name: ROUGE-2
type: rouge
value: 5.9272
verified: true
- name: ROUGE-L
type: rouge
value: 16.0136
verified: true
- name: ROUGE-LSUM
type: rouge
value: 32.941
verified: true
- name: loss
type: loss
value: 2.9339466094970703
verified: true
- name: gen_len
type: gen_len
value: 283.7198
verified: true
---
# README - long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
- latest version, testing metrics here
- created 2022-07-26_21-46-01
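A minimal local-inference sketch follows; the input file and the generation settings are illustrative assumptions, not the author's recommended parameters.

```python
# Illustrative sketch: long-document summarization with this checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9",
)

long_document = open("chapter.txt").read()  # hypothetical long input (this architecture handles up to 16384 tokens)
result = summarizer(long_document, max_length=256, min_length=32, no_repeat_ngram_size=3)
print(result[0]["summary_text"])
```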
|
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg1 | c0a35062be873c1bb645d854a032cd8dfbadf08f | 2022-07-27T09:38:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-ft750_reg1 | 19 | null | transformers | 8,697 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ft750_reg1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft750_reg1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9304
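The card does not describe the label setup; the `reg` suffix suggests a single-output regression-style head, so a hedged loading sketch (with that assumption made explicit) might look like this.

```python
# Hedged sketch: assumes a regression-style sequence-classification head,
# which the "reg" name suffix suggests but the card does not confirm.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "dminiotas05/distilbert-base-uncased-finetuned-ft750_reg1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits.squeeze())  # raw model output(s) for the input sentence
```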
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1787 | 1.0 | 188 | 1.4769 |
| 0.7256 | 2.0 | 376 | 1.0639 |
| 0.5268 | 3.0 | 564 | 0.9304 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
korca/roberta-base-lkm | 0ad1f72d82eccc9fefb56ada727a34b3ddf18376 | 2022-07-28T18:56:28.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | korca | null | korca/roberta-base-lkm | 19 | null | transformers | 8,698 | Entry not found |
AccurateIsaiah/DialoGPT-small-mozarkv2 | d3fa28c051bb64abb44690bdaa44d11321cd43b3 | 2021-11-23T21:49:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | AccurateIsaiah | null | AccurateIsaiah/DialoGPT-small-mozarkv2 | 18 | null | transformers | 8,699 | ---
tags:
- conversational
---
# Mozark's Brain Uploaded to Hugging Face but v2 |