modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lazyturtl/roomclassifier | 9459b019773ca1279fe099c515762acf5e06b71e | 2022-03-31T01:09:57.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lazyturtl | null | lazyturtl/roomclassifier | 73 | null | transformers | 5,300 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomclassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# roomclassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
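For quick inference, a minimal sketch using the `transformers` image-classification pipeline is shown below; the example image path is an assumption, and the returned labels depend on this model's `id2label` config.
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub
classifier = pipeline("image-classification", model="lazyturtl/roomclassifier")

# Pass a local file path or an image URL (hypothetical example path)
predictions = classifier("path/to/room_photo.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```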
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### Laundry room

#### Livingroom
 |
Nonem100/Test-Model | 57666cfe40a32679221ada968a18cbffb8254b64 | 2022-03-31T15:19:38.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Nonem100 | null | Nonem100/Test-Model | 73 | null | transformers | 5,301 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Test-Model
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9017857313156128
---
# Test-Model
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cotton candy

#### hamburger

#### hot dog

#### nachos

#### popcorn
 |
nickmuchi/swin-tiny-patch4-window7-224-finetuned-eurosat | 55f500ccbd1ee1c7878e51ff889faf0a0327c708 | 2022-05-24T02:08:03.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"dataset:nielsr/eurosat-demo",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nickmuchi | null | nickmuchi/swin-tiny-patch4-window7-224-finetuned-eurosat | 73 | null | transformers | 5,302 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
- nielsr/eurosat-demo
widget:
- src: https://drive.google.com/uc?id=1trKgvkMRQ3BB0VcqnDwmieLxXhWmS8rq
example_title: Annual Crop
- src: https://drive.google.com/uc?id=1kWQbPNHVa_JscS0age5E0UOSBcU1bh18
example_title: Forest
- src: https://drive.google.com/uc?id=12YbxF-MfpMqLPB91HuTPEgcg1xnZKhGP
example_title: Herbaceous Vegetation
- src: https://drive.google.com/uc?id=1NkzDiaQ1ciMDf89C8uA5zGx984bwkFCi
example_title: Highway
- src: https://drive.google.com/uc?id=1F6r7O0rlgzaPvY6XBpFOWUTIddEIUkxx
example_title: Industrial
- src: https://drive.google.com/uc?id=16zOtFHZ9E17jA9Ua4PsXrUjugSs77XKm
example_title: Pasture
- src: https://drive.google.com/uc?id=163tqIdoVY7WFtKQlpz_bPM9WjwbJAtd
example_title: Permanent Crop
- src: https://drive.google.com/uc?id=1qsX-XsrE3dMp7C7LLVa6HriaABIXuBrJ
example_title: Residential
- src: https://drive.google.com/uc?id=1UK2praQHbNXDnctJt58rrlQZu84lxyk
example_title: River
- src: https://drive.google.com/uc?id=1zVAfR7N5hXy6eq1cVOd8bXPjC1sqxVir
example_title: Sea Lake
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9848148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0536
- Accuracy: 0.9848
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
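These values map roughly onto the following hedged `TrainingArguments` sketch; the `output_dir` and the surrounding `Trainer`/data setup are assumptions and are omitted, and the Adam settings above are the library defaults.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # gives a total train batch size of 128
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```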
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2602 | 1.0 | 190 | 0.1310 | 0.9563 |
| 0.1975 | 2.0 | 380 | 0.1063 | 0.9637 |
| 0.142 | 3.0 | 570 | 0.0642 | 0.9767 |
| 0.1235 | 4.0 | 760 | 0.0560 | 0.9837 |
| 0.1019 | 5.0 | 950 | 0.0536 | 0.9848 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-japanese-luw-upos | 7cbf54c18a6139d57cb47b0bc2e97bf87a9c3191 | 2022-07-23T14:43:41.000Z | [
"pytorch",
"deberta-v2",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-luw-upos | 73 | null | transformers | 5,303 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# deberta-base-japanese-luw-upos
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-japanese-luw-upos")
s="国境の長いトンネルを抜けると雪国であった。"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
roydcarlson/grain | 36e78c0a9507f36b58a71e2f9f5c859af2d9537f | 2022-05-27T17:01:52.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | roydcarlson | null | roydcarlson/grain | 73 | null | transformers | 5,304 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: grain
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6607142686843872
---
# grain
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### barley

#### buckwheat

#### millet

#### teff

#### wheat
 |
KL/swin-tiny-patch4-window7-224-finetuned-eurosat | ccbaa5f4cf45f5d4ee5eac3950bca6a8c293faf1 | 2022-05-29T12:07:22.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"transformers"
] | image-classification | false | KL | null | KL/swin-tiny-patch4-window7-224-finetuned-eurosat | 73 | null | transformers | 5,305 | Entry not found |
RUCAIBox/mtl-data-to-text | ed44b68242f30903d42abf0f13c1d9af5c1bb8f8 | 2022-06-27T02:27:10.000Z | [
"pytorch",
"mvp",
"en",
"arxiv:2206.12131",
"transformers",
"text-generation",
"text2text-generation",
"license:apache-2.0"
] | text2text-generation | false | RUCAIBox | null | RUCAIBox/mtl-data-to-text | 73 | null | transformers | 5,306 | ---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
example_title: "Example1"
- text: "Describe the following data: First Clearing | LOCATION | On NYS 52 1 Mi. Youngsville [SEP] On NYS 52 1 Mi. Youngsville | CITY_OR_TOWN | Callicoon, New York"
example_title: "Example2"
---
# MTL-data-to-text
The MTL-data-to-text model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-data-to-text was pre-trained in a supervised fashion on a mixture of labeled data-to-text datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-data-to-text is specially designed for data-to-text generation tasks, such as KG-to-text generation (WebNLG, DART), table-to-text generation (WikiBio, ToTTo) and MR-to-text generation (E2E).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
aws-ai/dse-bert-base | 918ad931256ade24add8b1840a710e9e96bc9b40 | 2022-07-10T19:43:15.000Z | [
"pytorch",
"bert",
"transformers"
] | null | false | aws-ai | null | aws-ai/dse-bert-base | 73 | null | transformers | 5,307 | Entry not found |
CLTL/gm-ner-xlmrbase | 120252d7c808f3997ca6423a57087077273f79e0 | 2021-11-09T16:14:39.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"token-classification",
"nl",
"transformers",
"dighum",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | CLTL | null | CLTL/gm-ner-xlmrbase | 72 | null | transformers | 5,308 | ---
language: nl
license: apache-2.0
tags:
- dighum
pipeline_tag: token-classification
---
# Early-modern Dutch NER (General Letters)
## Description
This is a fine-tuned NER model for early-modern Dutch United East India Company (VOC) letters based on XLM-R_base [(Conneau et al., 2020)](https://aclanthology.org/2020.acl-main.747/). The model identifies *locations*, *persons*, *organisations*, but also *ships* as well as derived forms of locations and religions.
## Intended uses and limitations
This model was fine-tuned (trained, validated and tested) on a single source of data, the General Letters (Generale Missiven). These letters cover a wide variety of Dutch, as they span most of the 17th and 18th centuries and were extended with editorial notes between 1960 and 2017. However, because the model was fine-tuned only on this data, it may perform less well on other texts from the same period.
## How to use
The model can run on raw text through the *token-classification* pipeline:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("CLTL/gm-ner-xlmrbase")
model = AutoModelForTokenClassification.from_pretrained("CLTL/gm-ner-xlmrbase")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Batavia heeft om advies gevraagd."
ner_results = nlp(example)
print(ner_results)
```
This outputs a list of entities with their character offsets in the input text:
```
[{'entity': 'B-LOC', 'score': 0.99739265, 'index': 1, 'word': '▁Bata', 'start': 0, 'end': 4}, {'entity': 'I-LOC', 'score': 0.5373179, 'index': 2, 'word': 'via', 'start': 4, 'end': 7}]
```
## Training data and tagset
The model was fine-tuned on the General Letters [GM-NER](https://github.com/cltl/voc-missives/tree/master/data/ner/datasplit_all_standard) dataset, with the following tagset:
| tag | description | notes |
| --- | ----------- | ----- |
| LOC | locations | |
| LOCderiv | derived forms of locations | by derivation, e.g. *Bandanezen*, or composition, e.g. *Javakoffie* |
| ORG | organisations | includes forms derived by composition, e.g. *Compagnieszaken* |
| PER | persons | |
| RELderiv | forms related to religion | merges religion names (*Christendom*), derived forms (*christenen*) and composed forms (*Christen-orangkay*) |
| SHP | ships | |
The base text for this dataset is OCR text that has been partially corrected. The text is clean overall but errors remain.
## Training procedure
The model was fine-tuned with [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), using [this script](https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py).
Non-default training parameters are:
* training batch size: 16
* max sequence length: 256
* number of epochs: 4 -- loading the best checkpoint model by loss at the end, with checkpoints every 200 steps
* (seed: 1)
## Evaluation
### Metric
* entity-level F1
### Results
| tag | F1 |
| --- | --- |
| overall | 92.7 |
| LOC | 95.8 |
| LOCderiv | 92.7 |
| ORG | 92.5 |
| PER | 86.2 |
| RELderiv | 90.7 |
| SHP | 81.6 |
## Reference
The model and fine-tuning data presented here were developed as part of:
```bibtex
@inproceedings{arnoult-etal-2021-batavia,
title = "Batavia asked for advice. Pretrained language models for Named Entity Recognition in historical texts.",
author = "Arnoult, Sophie I. and
Petram, Lodewijk and
Vossen, Piek",
booktitle = "Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic (online)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.latechclfl-1.3",
pages = "21--30"
}
```
|
Helsinki-NLP/opus-mt-aav-en | f0d56d0d1bb26a58faa0a70d8804809a58e6a06d | 2021-01-18T07:45:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"km",
"aav",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-aav-en | 72 | null | transformers | 5,309 | ---
language:
- vi
- km
- aav
- en
tags:
- translation
license: apache-2.0
---
### aav-eng
* source group: Austro-Asiatic languages
* target group: English
* OPUS readme: [aav-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md)
* model: transformer
* source language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hoc-eng.hoc.eng | 0.3 | 0.095 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.115 |
| Tatoeba-test.khm-eng.khm.eng | 8.9 | 0.271 |
| Tatoeba-test.mnw-eng.mnw.eng | 0.8 | 0.118 |
| Tatoeba-test.multi.eng | 24.8 | 0.391 |
| Tatoeba-test.vie-eng.vie.eng | 38.7 | 0.567 |
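A minimal, hedged usage sketch with the 🤗 Transformers Marian implementation; the Vietnamese example sentence and the default generation settings are assumptions.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-aav-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Vietnamese sentence into English (assumed example input)
batch = tokenizer(["Tôi đang học tiếng Anh."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```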
### System Info:
- hf_name: aav-eng
- source_languages: aav
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'km', 'aav', 'en']
- src_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt
- src_alpha3: aav
- tgt_alpha3: eng
- short_pair: aav-en
- chrF2_score: 0.391
- bleu: 24.8
- brevity_penalty: 0.968
- ref_len: 36693.0
- src_name: Austro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: aav
- tgt_alpha2: en
- prefer_old: False
- long_pair: aav-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-it-vi | 043beee1bbe972313387181b4fd1d4796a15fe0a | 2020-08-21T14:42:46.000Z | [
"pytorch",
"marian",
"text2text-generation",
"it",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-it-vi | 72 | null | transformers | 5,310 | ---
language:
- it
- vi
tags:
- translation
license: apache-2.0
---
### ita-vie
* source group: Italian
* target group: Vietnamese
* OPUS readme: [ita-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-vie/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): vie
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.vie | 36.2 | 0.535 |
### System Info:
- hf_name: ita-vie
- source_languages: ita
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'vi']
- src_constituents: {'ita'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: vie
- short_pair: it-vi
- chrF2_score: 0.535
- bleu: 36.2
- brevity_penalty: 1.0
- ref_len: 2144.0
- src_name: Italian
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: vi
- prefer_old: False
- long_pair: ita-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ru-vi | 5fc954aae39caa5f6f65dc8837328254d4927b07 | 2020-08-21T14:42:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ru",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ru-vi | 72 | null | transformers | 5,311 | ---
language:
- ru
- vi
tags:
- translation
license: apache-2.0
---
### rus-vie
* source group: Russian
* target group: Vietnamese
* OPUS readme: [rus-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): vie
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.vie | 16.9 | 0.346 |
### System Info:
- hf_name: rus-vie
- source_languages: rus
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'vi']
- src_constituents: {'rus'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: vie
- short_pair: ru-vi
- chrF2_score: 0.346
- bleu: 16.9
- brevity_penalty: 1.0
- ref_len: 2566.0
- src_name: Russian
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: vi
- prefer_old: False
- long_pair: rus-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-vi-de | 5732a1f19967c1ba48e9ac85428f4e6cfea6ecc3 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-vi-de | 72 | null | transformers | 5,312 | ---
language:
- vi
- de
tags:
- translation
license: apache-2.0
---
### vie-deu
* source group: Vietnamese
* target group: German
* OPUS readme: [vie-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.deu | 27.6 | 0.484 |
### System Info:
- hf_name: vie-deu
- source_languages: vie
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'de']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: deu
- short_pair: vi-de
- chrF2_score: 0.484
- bleu: 27.6
- brevity_penalty: 0.958
- ref_len: 3365.0
- src_name: Vietnamese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: de
- prefer_old: False
- long_pair: vie-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Maltehb/danish-bert-botxo-ner-dane | 0535804050650a7c1dde9b51af68b1e039d6df0a | 2021-11-12T08:36:46.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:dindebat.dk",
"dataset:hestenettet.dk",
"dataset:danish OpenSubtitles",
"transformers",
"danish",
"masked-lm",
"botxo",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | Maltehb | null | Maltehb/danish-bert-botxo-ner-dane | 72 | 1 | transformers | 5,313 | ---
language: da
tags:
- danish
- bert
- masked-lm
- botxo
license: cc-by-4.0
datasets:
- common_crawl
- wikipedia
- dindebat.dk
- hestenettet.dk
- danish OpenSubtitles
widget:
- text: "Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."
---
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO) finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
Humongous amounts of credit need to go to [Certainly](https://certainly.io/) (previously known as BotXO) for pretraining the Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
The model is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example of how to load Danish BERT for token classification in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
```
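To run actual predictions with the loaded model, a hedged sketch using the token-classification pipeline follows; the example sentence comes from the widget above and the aggregation setting is an assumption.
```python
from transformers import pipeline

# Group sub-word pieces back into whole entities (assumed aggregation strategy)
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."))
```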
### References
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20DanishBERTUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ |
Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit | 21ac01bac24bf051aa64428d105d95921ec4e562 | 2022-06-18T13:04:47.000Z | [
"pytorch",
"gpt_neo",
"feature-extraction",
"arxiv:2202.08904",
"sentence-transformers",
"sentence-similarity"
] | sentence-similarity | false | Muennighoff | null | Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit | 72 | null | sentence-transformers | 5,314 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# SGPT-1.3B-weightedmean-nli-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
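Since this is a sentence-transformers model (see the full architecture below), a minimal hedged embedding sketch is given here; the example sentences are assumptions.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-1.3B-weightedmean-nli-bitfit")
sentences = ["How do I bake bread?", "A simple recipe for sourdough"]  # assumed examples
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 2048), matching the 2048-dim pooling layer below
```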
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 93941 with parameters:
```
{'batch_size': 6}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 9394,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 9395,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
Rostlab/prot_t5_xxl_bfd | 34a420890330b9335d7292c36d8950c7952f09c9 | 2020-12-11T10:20:10.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | Rostlab | null | Rostlab/prot_t5_xxl_bfd | 72 | null | transformers | 5,315 | Entry not found |
aloxatel/mbert | cce439353fe629e6fdb88d10cb326d0a7a405a02 | 2021-05-19T11:43:34.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | aloxatel | null | aloxatel/mbert | 72 | 1 | transformers | 5,316 | Entry not found |
cambridgeltl/tacl-bert-base-chinese | c86daf0753de79319b2066897a54c6cae64daf85 | 2021-10-28T17:51:55.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/tacl-bert-base-chinese | 72 | null | transformers | 5,317 | Entry not found |
jpcorb20/toxic-detector-distilroberta | 88d1b244e128ed29bc23a68338258784cf2e4008 | 2021-05-20T17:25:58.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | jpcorb20 | null | jpcorb20/toxic-detector-distilroberta | 72 | 1 | transformers | 5,318 | # Distilroberta for toxic comment detection
See my GitHub repo [toxic-comment-server](https://github.com/jpcorb20/toxic-comment-server)
The model was trained from [DistilRoberta](https://huggingface.co/distilroberta-base) on [Kaggle Toxic Comments](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) with the BCEWithLogits loss for multi-label prediction. Apply a sigmoid activation to the logits rather than a softmax (the softmax output that e.g. the HF widget shows is not appropriate for this multi-label setup).
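A hedged sketch of that sigmoid-based multi-label scoring; the input comment is an assumption, and the label names should be confirmed against `model.config.id2label`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jpcorb20/toxic-detector-distilroberta")
model = AutoModelForSequenceClassification.from_pretrained("jpcorb20/toxic-detector-distilroberta")

inputs = tokenizer("your comment here", return_tensors="pt")  # assumed example input
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent probability per label (multi-label)
for label_id, p in enumerate(probs.tolist()):
    print(model.config.id2label[label_id], round(p, 3))
```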
## Evaluation
F1 scores:

| label | F1 |
| --- | --- |
| toxic | 0.72 |
| severe_toxic | 0.38 |
| obscene | 0.72 |
| threat | 0.52 |
| insult | 0.69 |
| identity_hate | 0.60 |

Macro-F1: 0.61 |
mrm8488/distilbert-base-multi-cased-finetuned-typo-detection | 3d191639cca2821fbfebef7c779a2bba6228a6bb | 2020-12-11T21:53:44.000Z | [
"pytorch",
"distilbert",
"token-classification",
"multilingual",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/distilbert-base-multi-cased-finetuned-typo-detection | 72 | null | transformers | 5,319 | ---
language: multilingual
thumbnail:
---
# DISTILBERT 🌎 + Typo Detection ✍❌✍✔
[distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) fine-tuned on [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) for **typo detection** (using *NER* style)
## Details of the downstream task (Typo detection as NER)
- Dataset: [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) 📚 for 15 languages
- [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py) 🏋️♂️
## Metrics on test set 📋
| Metric | # score |
| :-------: | :-------: |
| F1 | **93.51** |
| Precision | **96.08** |
| Recall | **91.06** |
## Model in action 🔨
Fast usage with **pipelines** 🧪
```python
from transformers import pipeline
typo_checker = pipeline(
"ner",
model="mrm8488/distilbert-base-multi-cased-finetuned-typo-detection",
tokenizer="mrm8488/distilbert-base-multi-cased-finetuned-typo-detection"
)
result = typo_checker("Adddd validation midelware")
result[1:-1]
# Output:
[{'entity': 'ok', 'score': 0.7128152847290039, 'word': 'add'},
{'entity': 'typo', 'score': 0.5388424396514893, 'word': '##dd'},
{'entity': 'ok', 'score': 0.94792640209198, 'word': 'validation'},
{'entity': 'typo', 'score': 0.5839331746101379, 'word': 'mid'},
{'entity': 'ok', 'score': 0.5195121765136719, 'word': '##el'},
{'entity': 'ok', 'score': 0.7222476601600647, 'word': '##ware'}]
```
It works 🎉! We deliberately misspelled `Add` and `middleware` (as `Adddd` and `midelware`), and the model flags the faulty tokens as `typo`.
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nateraw/rare-puppers-09-04-2021 | 55954fde8839b77f629c664ef2a9626181f2796b | 2021-09-04T20:46:06.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/rare-puppers-09-04-2021 | 72 | null | transformers | 5,320 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers-09-04-2021
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8657407164573669
---
# rare-puppers-09-04-2021
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
saburbutt/roberta_base_tweetqa_model | 433145b954f69bff58a02725757f9f1f33b50e06 | 2021-05-20T19:58:30.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | saburbutt | null | saburbutt/roberta_base_tweetqa_model | 72 | null | transformers | 5,321 | Entry not found |
stevenshoemaker/horror | de0fda6abd5856125fe2c236c8ca7cb1b58c0fcc | 2021-05-23T12:56:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stevenshoemaker | null | stevenshoemaker/horror | 72 | null | transformers | 5,322 | Entry not found |
facebook/wav2vec2-base-es-voxpopuli-v2 | b982ca9b90f554145513d3a5e524f65bb6f20be0 | 2022-02-27T13:11:53.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"es",
"dataset:voxpopuli",
"arxiv:2101.00390",
"transformers",
"audio",
"automatic-speech-recognition",
"voxpopuli-v2",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-es-voxpopuli-v2 | 72 | null | transformers | 5,323 | ---
language: es
tags:
- audio
- automatic-speech-recognition
- voxpopuli-v2
datasets:
- voxpopuli
license: cc-by-nc-4.0
inference: false
---
# Wav2Vec2-base-VoxPopuli-V2
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only on **es** (Spanish) speech, using **21.4k** hours of unlabeled data from the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
The model is pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **es**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more detailed explanation of how to fine-tune the model.
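Until such fine-tuning is done, the pretrained encoder can still be used to extract speech representations; a hedged sketch follows, where the one-second dummy waveform is an assumption.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base-es-voxpopuli-v2"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)

waveform = torch.randn(16000)  # 1 second of dummy audio at 16 kHz (placeholder)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```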
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*.
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
|
nickmuchi/vit-finetuned-cats-dogs | 5cefa517e61aa63ca6b1642d887c3a65b233ef34 | 2022-03-01T13:15:13.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nickmuchi | null | nickmuchi/vit-finetuned-cats-dogs | 72 | null | transformers | 5,324 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
widget:
- src: https://cdn.pixabay.com/photo/2021/09/19/12/19/animal-6637774_1280.jpg
example_title: Dog
- src: https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg
example_title: Cat
model-index:
- name: vit-finetuned-cats-dogs
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9971014261245728
---
# vit-finetuned-cats-dogs
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### dog
 |
hafidber/fruits | 28be4b5394f7cdaf3b8018e7f93e10552dbe7a27 | 2022-04-07T15:02:57.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | hafidber | null | hafidber/fruits | 72 | null | transformers | 5,325 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fruits
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9910714030265808
---
# fruits
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### apple

#### banana

#### grape

#### kiwi

#### lemon
 |
nielsr/swin-tiny-patch4-window7-224-finetuned-cifar10 | b21db45e5b3fbd1e83f9787e07f3fe80ad254206 | 2022-04-11T12:19:54.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nielsr | null | nielsr/swin-tiny-patch4-window7-224-finetuned-cifar10 | 72 | null | transformers | 5,326 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9788888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-cifar10
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0690
- Accuracy: 0.9789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2446 | 1.0 | 190 | 0.1128 | 0.9659 |
| 0.1722 | 2.0 | 380 | 0.1034 | 0.9663 |
| 0.1355 | 3.0 | 570 | 0.0690 | 0.9789 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Zayn/VIT_Basic | d8ecb81a9b939600d0e850c6e3c160c7a14cc37e | 2022-04-22T16:19:34.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Zayn | null | Zayn/VIT_Basic | 72 | null | transformers | 5,327 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: VIT_Basic
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9107142686843872
---
# VIT_Basic
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### chairs

#### hot dog

#### ice cream

#### ladders

#### tables
 |
Gunulhona/tbstmodel_v3 | 488b4153082302af6fc4a20d151ce031b80e3dfb | 2022-07-30T08:34:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Gunulhona | null | Gunulhona/tbstmodel_v3 | 72 | null | transformers | 5,328 | Entry not found |
mrm8488/data2vec-base-finetuned-imagenet1k | 3bfe19761dfeb217806b7302baa7355f7e538f0f | 2022-05-04T14:55:41.000Z | [
"pytorch",
"data2vec-vision",
"image-classification",
"transformers"
] | image-classification | false | mrm8488 | null | mrm8488/data2vec-base-finetuned-imagenet1k | 72 | null | transformers | 5,329 | Entry not found |
Ahmed9275/ALL-test | a6697d6f66bd8c8800fc580debdb85fe93439819 | 2022-05-05T23:55:05.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Ahmed9275 | null | Ahmed9275/ALL-test | 72 | null | transformers | 5,330 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: ALL-test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9572474360466003
---
# ALL-test
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
zzzzzzttt/vit-base-patch16-224-finetuned-eurosat | d1af73b119967cf26591363e14753b10d1b5718a | 2022-05-06T05:29:18.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | zzzzzzttt | null | zzzzzzttt/vit-base-patch16-224-finetuned-eurosat | 72 | null | transformers | 5,331 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9071691176470589
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.9072
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5417 | 0.99 | 76 | 0.5556 | 0.8263 |
| 0.4853 | 1.99 | 152 | 0.5319 | 0.8199 |
| 0.4926 | 2.99 | 228 | 0.5133 | 0.8539 |
| 0.4131 | 3.99 | 304 | 0.4481 | 0.8603 |
| 0.4081 | 4.99 | 380 | 0.4280 | 0.8824 |
| 0.3287 | 5.99 | 456 | 0.4330 | 0.8667 |
| 0.3381 | 6.99 | 532 | 0.3549 | 0.8888 |
| 0.3182 | 7.99 | 608 | 0.3382 | 0.8961 |
| 0.3046 | 8.99 | 684 | 0.3790 | 0.8925 |
| 0.3093 | 9.99 | 760 | 0.3209 | 0.9072 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
karthiksv/vit-base-beans | da7e199b23fe32e5145d756571e762c1a50d603f | 2022-05-12T15:21:37.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | karthiksv | null | karthiksv/vit-base-beans | 72 | null | transformers | 5,332 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
model-index:
- name: vit-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.1
- Datasets 2.1.0
- Tokenizers 0.12.1
|
KoichiYasuoka/deberta-base-japanese-aozora | 4cfed2b76e0089667aec79fb4fce318939282cc8 | 2022-07-23T14:43:28.000Z | [
"pytorch",
"deberta-v2",
"fill-mask",
"ja",
"transformers",
"japanese",
"masked-lm",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | fill-mask | false | KoichiYasuoka | null | KoichiYasuoka/deberta-base-japanese-aozora | 72 | null | transformers | 5,333 | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# deberta-base-japanese-aozora
## Model Description
This is a DeBERTa(V2) model pre-trained on 青空文庫 texts. You can fine-tune `deberta-base-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora")
```
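A hedged follow-up for masked-word prediction with the objects loaded above; the sentence is the widget example, and the actual top predictions will vary.
```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("日本に着いたら[MASK]を訪ねなさい。"))  # widget example sentence
```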
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
Annabelleabbott/swin-tiny-patch4-window7-224-finetuned-eurosat | b854e399d64a2351130b6814af6755313f787a0c | 2022-05-25T15:56:42.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | Annabelleabbott | null | Annabelleabbott/swin-tiny-patch4-window7-224-finetuned-eurosat | 72 | null | transformers | 5,334 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9725925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0767
- Accuracy: 0.9726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2548 | 1.0 | 190 | 0.1162 | 0.9652 |
| 0.1544 | 2.0 | 380 | 0.0894 | 0.9719 |
| 0.1182 | 3.0 | 570 | 0.0767 | 0.9726 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
GRANTHE2761/swin-tiny-patch4-window7-224-finetuned-eurosat | b37ddc121c456f71672a41cc430af49eee88e966 | 2022-05-26T09:00:52.000Z | [
"pytorch",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | GRANTHE2761 | null | GRANTHE2761/swin-tiny-patch4-window7-224-finetuned-eurosat | 72 | null | transformers | 5,335 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9688888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0866
- Accuracy: 0.9689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3046 | 1.0 | 95 | 0.1547 | 0.9452 |
| 0.191 | 2.0 | 190 | 0.1161 | 0.9559 |
| 0.1701 | 3.0 | 285 | 0.0866 | 0.9689 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Anjoe/kant-gpt2-large | 2e4fda374a8d2bc2b113310efaf19a77d9d65461 | 2022-07-21T14:32:36.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | Anjoe | null | Anjoe/kant-gpt2-large | 72 | null | transformers | 5,336 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: kant-gpt2-large
results: []
---
# kant-gpt2-large
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large). It was trained on the "Akademie Ausgabe" of the works of Immanuel Kant.
It achieves the following results on the evaluation set:
- Loss: 3.4257
## Model description
A large version of gpt2
## Intended uses & limitations
It could be used for the analysis of knowledge representation in and extraction from large language models
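A minimal generation sketch (an illustrative addition, not from the original card; the German prompt and sampling settings are arbitrary):

```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint for German text generation.
generator = pipeline("text-generation", model="Anjoe/kant-gpt2-large")

# An arbitrary German prompt ("Reason is ..."); sampling settings are illustrative only.
output = generator(
    "Die Vernunft ist",
    max_length=60,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```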
## Training and evaluation data
Akademie Ausgabe Immanuel Kant
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.4094 | 1.0 | 11308 | 3.3838 |
| 3.0445 | 2.0 | 22616 | 3.3107 |
| 2.7161 | 3.0 | 33924 | 3.3409 |
| 2.4793 | 4.0 | 45232 | 3.4257 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
s3h/arabic-token-ged-arabert | 6407a502bd65f8564c68094fe62613db195fa1c7 | 2022-07-01T15:04:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | s3h | null | s3h/arabic-token-ged-arabert | 72 | null | transformers | 5,337 | Entry not found |
brjezierski/bert-to-gpt2-german-to-easy-german | 39fd188fbf78a8b3531f47a210f95557c82d8e87 | 2022-07-13T22:47:58.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | brjezierski | null | brjezierski/bert-to-gpt2-german-to-easy-german | 72 | null | transformers | 5,338 | Entry not found |
DTAI-KULeuven/robbertje-1-gb-bort | 8a8832a3545206b8efb64db369f15d06f4eff0ac | 2022-02-24T09:57:08.000Z | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | DTAI-KULeuven | null | DTAI-KULeuven/robbertje-1-gb-bort | 71 | null | transformers | 5,339 | ---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
- RobBERTje
license: mit
datasets:
- oscar
- oscar (NL)
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-merged) |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | this model |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL) from [cite](), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
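A minimal fill-mask sketch for the BORT variant (an illustrative addition, not from the original card), reusing the example sentence from the widget above:

```python
from transformers import pipeline

# Load the distilled BORT checkpoint with the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbertje-1-gb-bort")

# The widget sentence from this card; <mask> is the model's mask token.
for prediction in unmasker("Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."):
    print(prediction["token_str"], round(prediction["score"], 3))
```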
|
Geotrend/distilbert-base-ur-cased | 2d74a893b996945026a25aa41ac3a4427b5341b2 | 2021-08-16T13:24:21.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"ur",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-ur-cased | 71 | null | transformers | 5,340 | ---
language: ur
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-ur-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ur-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Helsinki-NLP/opus-mt-et-de | d3e6e2fd83bc8b61639fc47ab249dfdd5d981050 | 2021-09-09T21:45:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"et",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-et-de | 71 | null | transformers | 5,341 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-et-de
* source languages: et
* target languages: de
* OPUS readme: [et-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-de/opus-2020-01-20.eval.txt)
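A minimal usage sketch (an illustrative addition, not part of the original OPUS-MT card; the Estonian example sentence is arbitrary):

```python
from transformers import pipeline

# Load the Estonian-to-German checkpoint with the translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-et-de")

# An arbitrary Estonian sentence ("Good morning!"), used only for illustration.
print(translator("Tere hommikust!")[0]["translation_text"])
```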
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.et.de | 22.4 | 0.474 |
|
Helsinki-NLP/opus-mt-zh-nl | 51af542ab7955009cb30a4759a7bdd9db6a31f9d | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-nl | 71 | null | transformers | 5,342 | ---
language:
- zh
- nl
tags:
- translation
license: apache-2.0
---
### zho-nld
* source group: Chinese
* target group: Dutch
* OPUS readme: [zho-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Hira cmn_Kana cmn_Latn
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.nld | 31.5 | 0.525 |
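A minimal usage sketch (an illustrative addition, not from the original card) that loads the checkpoint directly with the Marian classes; the Chinese example sentence is arbitrary:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-nl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# An arbitrary Chinese sentence ("I like reading."), used only for illustration.
inputs = tokenizer(["我喜欢读书。"], return_tensors="pt", padding=True)
translated = model.generate(**inputs)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```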
### System Info:
- hf_name: zho-nld
- source_languages: zho
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'nl']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: nld
- short_pair: zh-nl
- chrF2_score: 0.525
- bleu: 31.5
- brevity_penalty: 0.9309999999999999
- ref_len: 13575.0
- src_name: Chinese
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: nl
- prefer_old: False
- long_pair: zho-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Lowin/chinese-bigbird-wwm-base-4096 | 5a7324c571df27341d5fdf571d2a4b6a6470d1c2 | 2021-11-24T15:58:17.000Z | [
"pytorch",
"big_bird",
"fill-mask",
"zh",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Lowin | null | Lowin/chinese-bigbird-wwm-base-4096 | 71 | 1 | transformers | 5,343 | ---
language:
- zh
license:
- apache-2.0
---
```python
from transformers import BertTokenizer
from transformers import BigBirdModel
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
tokenizer = BertTokenizer.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
```
https://github.com/LowinLi/chinese-bigbird |
Rolv-Arild/xls-r-300m-npsc-seq2seq | 6d3acfc42af6a610f154a8dfe36050c9b0fd93bb | 2022-02-18T18:51:44.000Z | [
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"model-index"
] | automatic-speech-recognition | false | Rolv-Arild | null | Rolv-Arild/xls-r-300m-npsc-seq2seq | 71 | null | transformers | 5,344 | ---
tags:
- generated_from_trainer
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2965
- Wer: 0.3144
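A minimal inference sketch (an illustrative addition; it assumes this speech-encoder-decoder checkpoint ships the processor files needed by the generic ASR pipeline, and `sample.wav` is a placeholder for a local 16 kHz recording):

```python
from transformers import pipeline

# Load the speech-encoder-decoder checkpoint with the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Rolv-Arild/xls-r-300m-npsc-seq2seq",
)

# "sample.wav" is a placeholder path for a local audio file.
print(asr("sample.wav")["text"])
```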
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.888 | 0.51 | 400 | 3.7320 | 0.9440 |
| 3.1636 | 1.02 | 800 | 2.9188 | 1.1916 |
| 2.773 | 1.53 | 1200 | 2.3347 | 1.0134 |
| 0.7198 | 2.04 | 1600 | 0.6678 | 0.4826 |
| 0.5255 | 2.55 | 2000 | 0.4605 | 0.4135 |
| 0.3961 | 3.06 | 2400 | 0.4266 | 0.3955 |
| 0.3424 | 3.57 | 2800 | 0.3786 | 0.3741 |
| 0.3858 | 4.08 | 3200 | 0.3161 | 0.3552 |
| 0.3218 | 4.59 | 3600 | 0.3029 | 0.3510 |
| 0.199 | 5.1 | 4000 | 0.2988 | 0.3418 |
| 0.2054 | 5.61 | 4400 | 0.2873 | 0.3434 |
| 0.1704 | 6.12 | 4800 | 0.3129 | 0.3432 |
| 0.1805 | 6.63 | 5200 | 0.2963 | 0.3413 |
| 0.2091 | 7.14 | 5600 | 0.2755 | 0.3329 |
| 0.1971 | 7.65 | 6000 | 0.2706 | 0.3309 |
| 0.1237 | 8.16 | 6400 | 0.2823 | 0.3270 |
| 0.123 | 8.67 | 6800 | 0.2754 | 0.3246 |
| 0.103 | 9.18 | 7200 | 0.2917 | 0.3272 |
| 0.1143 | 9.69 | 7600 | 0.2885 | 0.3305 |
| 0.156 | 10.2 | 8000 | 0.2810 | 0.3288 |
| 0.167 | 10.71 | 8400 | 0.2689 | 0.3232 |
| 0.0815 | 11.22 | 8800 | 0.2899 | 0.3236 |
| 0.0844 | 11.73 | 9200 | 0.2798 | 0.3225 |
| 0.0775 | 12.24 | 9600 | 0.2894 | 0.3224 |
| 0.0677 | 12.75 | 10000 | 0.2838 | 0.3204 |
| 0.1383 | 13.27 | 10400 | 0.2959 | 0.3211 |
| 0.1233 | 13.77 | 10800 | 0.2922 | 0.3213 |
| 0.0688 | 14.29 | 11200 | 0.2903 | 0.3209 |
| 0.0655 | 14.8 | 11600 | 0.2868 | 0.3182 |
| 0.0449 | 15.31 | 12000 | 0.2959 | 0.3172 |
| 0.0421 | 15.82 | 12400 | 0.2966 | 0.3180 |
| 0.0858 | 16.33 | 12800 | 0.2941 | 0.3164 |
| 0.0859 | 16.84 | 13200 | 0.2980 | 0.3165 |
| 0.0561 | 17.35 | 13600 | 0.2965 | 0.3165 |
| 0.0506 | 17.86 | 14000 | 0.2935 | 0.3148 |
| 0.0312 | 18.37 | 14400 | 0.2964 | 0.3154 |
| 0.0403 | 18.88 | 14800 | 0.2967 | 0.3160 |
| 0.0924 | 19.39 | 15200 | 0.2955 | 0.3147 |
| 0.0585 | 19.9 | 15600 | 0.2965 | 0.3144 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.18.1
- Tokenizers 0.11.0
|
SEBIS/code_trans_t5_base_commit_generation | 1a568af651d14ea287897f8507cfbfb65959f39b | 2021-06-23T04:56:59.000Z | [
"pytorch",
"jax",
"t5",
"feature-extraction",
"transformers",
"summarization"
] | summarization | false | SEBIS | null | SEBIS/code_trans_t5_base_commit_generation | 71 | null | transformers | 5,345 | ---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---
# CodeTrans model for git commit message generation
Pretrained model on git commits using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on Git Commit Message Generation dataset.
## Intended uses & limitations
The model could be used to generate a git commit message for a set of git changes, or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes; however, if the changes are tokenized, performance should be better.
### How to use
Here is how to use this model to generate a git commit message using the Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_commit_generation"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_commit_generation", skip_special_tokens=True),
device=0
)
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/commit%20generation/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score):
Test results :
| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 39.61 |
| CodeTrans-ST-Base | 38.67 |
| CodeTrans-TF-Small | 44.22 |
| CodeTrans-TF-Base | 44.17 |
| CodeTrans-TF-Large | **44.41** |
| CodeTrans-MT-Small | 36.17 |
| CodeTrans-MT-Base | 39.25 |
| CodeTrans-MT-Large | 41.18 |
| CodeTrans-MT-TF-Small | 43.96 |
| CodeTrans-MT-TF-Base | 44.19 |
| CodeTrans-MT-TF-Large | 44.34 |
| State of the art | 32.81 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
abhilash1910/french-roberta | 2358f6784bcec544c1b00598c8fc8631036384c3 | 2021-09-14T07:17:21.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"fr",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | abhilash1910 | null | abhilash1910/french-roberta | 71 | null | transformers | 5,346 | # Roberta Trained Model For Masked Language Model On French Corpus :robot:
This is a Masked Language Model trained with [Roberta](https://huggingface.co/transformers/model_doc/roberta.html) on a small French News Corpus (Leipzig corpora).
The model is built using Huggingface transformers.
The model can be found at: [French-Roberta](https://huggingface.co/abhilash1910/french-roberta)
## Specifications
The corpus for training is taken from the Leipzig Corpora (French News), and the model is trained on a small subset of the corpus (300K).
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=32000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
This is trained by using RobertaConfig from the transformers package. The total number of training parameters is 68,124,416.
The model is trained for 100 epochs with a GPU batch size of 64 units.
More details for building custom models can be found at the [HuggingFace Blog](https://huggingface.co/blog/how-to-train)
## Usage Specifications
To use this model, we first have to import the AutoTokenizer and AutoModelWithLMHead modules from transformers.
After that, we have to specify the pre-trained model, which in this case is 'abhilash1910/french-roberta', for both the tokenizer and the model.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("abhilash1910/french-roberta")
model = AutoModelWithLMHead.from_pretrained("abhilash1910/french-roberta")
```
After this, the model will be downloaded; it will take some time to download all the model files.
To test the model, we have to import the pipeline module from transformers and create a fill-mask pipeline for inference as follows:
```python
from transformers import pipeline
model_mask = pipeline('fill-mask', model='abhilash1910/french-roberta')
model_mask("Le tweet <mask>.")
```
Some of the examples are also provided with generic French sentences:
Example 1:
```python
model_mask("À ce jour, <mask> projet a entraîné")
```
Output:
```bash
[{'sequence': '<s>À ce jour, belles projet a entraîné</s>',
'score': 0.18685665726661682,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': '<s>À ce jour,- projet a entraîné</s>',
'score': 0.0005200508167035878,
'token': 17,
'token_str': '-'},
{'sequence': '<s>À ce jour, de projet a entraîné</s>',
'score': 0.00045729897101409733,
'token': 268,
'token_str': 'Ġde'},
{'sequence': '<s>À ce jour, du projet a entraîné</s>',
'score': 0.0004307595663703978,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': '<s>À ce jour," projet a entraîné</s>',
'score': 0.0004219160182401538,
'token': 6,
'token_str': '"'}]
```
Example 2:
```python
model_mask("C'est un <mask>")
```
Output:
```bash
[{'sequence': "<s>C'est un belles</s>",
'score': 0.16440927982330322,
'token': 6504,
'token_str': 'Ġbelles'},
{'sequence': "<s>C'est un de</s>",
'score': 0.0005495127406902611,
'token': 268,
'token_str': 'Ġde'},
{'sequence': "<s>C'est un du</s>",
'score': 0.00044988933950662613,
'token': 326,
'token_str': 'Ġdu'},
{'sequence': "<s>C'est un-</s>",
'score': 0.00044542422983795404,
'token': 17,
'token_str': '-'},
{'sequence': "<s>C'est un </s>",
'score': 0.00037563967634923756,
'token': 202,
'token_str': 'ĉ'}]
```
## Resources
For all resources, please look into the [HuggingFace](https://huggingface.co/) site and the [repositories](https://github.com/huggingface).
---
language:
- fr
tags:
- fill-mask
license: apache-2.0
---
|
abhiramtirumala/DialoGPT-sarcastic | 796ca7306a806583428e40747632577e5db932bc | 2021-06-30T19:52:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | abhiramtirumala | null | abhiramtirumala/DialoGPT-sarcastic | 71 | 4 | transformers | 5,347 |
---
pipeline_tag: conversational
---
This model is a fine-tuned version of Microsoft/DialoGPT-medium trained to create sarcastic responses from the dataset "Sarcasm on Reddit" located [here](https://www.kaggle.com/danofer/sarcasm). |
abinayam/gpt-2-tamil | 752d5c1069d9ae7b43019bd280300950f599c7e8 | 2021-07-23T06:24:40.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"ta",
"dataset:oscar",
"dataset:IndicNLP",
"transformers"
] | text-generation | false | abinayam | null | abinayam/gpt-2-tamil | 71 | 2 | transformers | 5,348 | ---
language: ta
datasets:
- oscar
- IndicNLP
widget:
- text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு'
---
# GPT2-Tamil
This repository was created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language.
## Setup:
To setup the project, run the following command,
```python
pip install -r requirements.txt
```
## Model:
Pretrained model on Tamil language using a causal language modeling (CLM) objective.
## Dataset Used:
The GPT-2 model is trained on [oscar dataset - ta](https://huggingface.co/datasets/oscar) and [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/)
## Intended uses & limitations:
You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
## How to pretrain the model:
To perform training, do the following steps,
- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)
```python
>>> export MODEL_DIR=<model_dir>
```
- Create the config.json by running the following command,
```python
>>> python src/create_config.py
```
- Create the tokenizer by running the following command,
```python
>>> python src/train_tokenizer.py
```
- Once the config and tokenizer are created, run the following script to start training the flax model
```python
>>> python scripts/train_gpt2-oscar-tamil.sh
```
## How to use:
To perform language generation using the model, pipeline can be used directly.
- First convert the flax model to pytorch using the following command,
```python
python src/convert_flax_to_pytorch.py
```
- Use the following snippet to perform language generation,
```python
>>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed
>>> model_name = 'abinayam/gpt-2-tamil'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> set_seed(42)
>>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு"
>>> max_len = 300
>>> no_seq = 5
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq)
```
|
airesearch/wangchanberta-base-wiki-newmm | 840fd2896fd1a23f9f6366ab458863bdc4e921f8 | 2021-09-11T09:39:18.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"transformers",
"autotrain_compatible"
] | fill-mask | false | airesearch | null | airesearch/wangchanberta-base-wiki-newmm | 71 | null | transformers | 5,349 | ---
language: th
---
# WangchanBERTa base model: `wangchanberta-base-wiki-newmm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
The `wangchanberta-base-wiki-newmm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles from 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We exclude lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use word-level tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer, namely `newmm`. The total number of word-level tokens in the vocabulary is 97,982.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them with an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with the <mask> token. Out of the 15%, 80% is replaced with a <mask> token, 10% is left unchanged, and 10% is replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
davanstrien/iiif_manuscript_vit | 37b1ed562376f16bb2dce761dcfd57fc582ba047 | 2022-02-10T22:49:42.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | davanstrien | null | davanstrien/iiif_manuscript_vit | 71 | null | transformers | 5,350 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: iiif_manuscript_vit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iiif_manuscript_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- F1: 0.5996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5639 | 1.0 | 2269 | 0.5822 | 0.5516 |
| 0.5834 | 2.0 | 4538 | 0.5825 | 0.5346 |
| 0.5778 | 3.0 | 6807 | 0.5794 | 0.6034 |
| 0.5735 | 4.0 | 9076 | 0.5742 | 0.5713 |
| 0.5731 | 5.0 | 11345 | 0.5745 | 0.6008 |
| 0.5701 | 6.0 | 13614 | 0.5729 | 0.5499 |
| 0.5696 | 7.0 | 15883 | 0.5717 | 0.5952 |
| 0.5683 | 8.0 | 18152 | 0.5680 | 0.6005 |
| 0.5648 | 9.0 | 20421 | 0.5679 | 0.5967 |
| 0.564 | 10.0 | 22690 | 0.5684 | 0.5996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
dvm1983/TinyBERT_General_4L_312D_de | 3b3084a67cb7894f26fbc0232ce3e8fbc1b61fc9 | 2021-08-22T16:44:48.000Z | [
"pytorch",
"bert",
"de",
"dataset:wiki",
"arxiv:1909.10351",
"transformers",
"tinybert",
"fill-mask"
] | fill-mask | false | dvm1983 | null | dvm1983/TinyBERT_General_4L_312D_de | 71 | null | transformers | 5,351 | ---
language:
- de
tags:
- tinybert
- fill-mask
datasets:
- wiki
---
This is a TinyBERT model for the German language (de). The model was created by distilling the German BERT base cased model (https://huggingface.co/dbmdz/bert-base-german-cased) in the way described in https://arxiv.org/abs/1909.10351 (TinyBERT: Distilling BERT for Natural Language Understanding).
Dataset:
German Wikipedia Text Corpus - https://github.com/t-systems-on-site-services-gmbh/german-wikipedia-text-corpus
Versions:
- torch==1.4.0
- transformers==4.8.1

How to load the model for the LM (fill-mask) task (`model_dir` is a placeholder for the local directory holding the downloaded model files):

```python
import torch
from torch import nn
import transformers

model_dir = "path/to/TinyBERT_General_4L_312D_de"  # placeholder: local directory with vocab, config and weights

tokenizer = transformers.BertTokenizer.from_pretrained(model_dir + '/vocab.txt', do_lower_case=False)
config = transformers.BertConfig.from_json_file(model_dir + '/config.json')
model = transformers.BertModel(config=config)
model.pooler = nn.Sequential(
    nn.Linear(in_features=model.config.hidden_size, out_features=model.config.hidden_size, bias=True),
    nn.LayerNorm((model.config.hidden_size,), eps=1e-12, elementwise_affine=True),
    nn.Linear(in_features=model.config.hidden_size, out_features=len(tokenizer), bias=True),
)
model.resize_token_embeddings(len(tokenizer))
checkpoint = torch.load(model_dir + '/pytorch_model.bin', map_location=torch.device('cuda'))
model.load_state_dict(checkpoint)
```

In case of an NER or classification task, we have to load the model as for the LM task and change the pooler:

```python
model.pooler = nn.Sequential(
    nn.Dropout(p=config.hidden_dropout_prob, inplace=False),
    nn.Linear(in_features=config.hidden_size, out_features=n_classes, bias=True),  # n_classes: number of target labels
)
```
|
godiec/diam | a83df7f3dc0379b2f64317b1dc0c757a40018053 | 2021-12-13T19:12:32.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | godiec | null | godiec/diam | 71 | null | transformers | 5,352 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: diam
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9775280952453613
---
# diam
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bunny

#### moon

#### sun

#### tiger
 |
harr/distilbert-base-uncased-finetuned-ingredients | 4d043103a8ee532364bf569c53e4a06c2eb6d5c5 | 2021-09-11T09:20:35.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:ingredients_yes_no",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | harr | null | harr/distilbert-base-uncased-finetuned-ingredients | 71 | 3 | transformers | 5,353 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ingredients_yes_no
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ingredients
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ingredients_yes_no
type: ingredients_yes_no
args: IngredientsYesNo
metrics:
- name: Precision
type: precision
value: 0.9898648648648649
- name: Recall
type: recall
value: 0.9932203389830508
- name: F1
type: f1
value: 0.9915397631133671
- name: Accuracy
type: accuracy
value: 0.9978308026030369
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ingredients
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ingredients_yes_no dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0105
- Precision: 0.9899
- Recall: 0.9932
- F1: 0.9915
- Accuracy: 0.9978
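A minimal inference sketch (an illustrative addition, not from the original card; the example sentence is arbitrary and the predicted label names depend on the dataset's tag set):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the token-classification (NER-style) pipeline.
tagger = pipeline(
    "token-classification",
    model="harr/distilbert-base-uncased-finetuned-ingredients",
    aggregation_strategy="simple",  # merge word pieces into whole-word predictions
)

# An arbitrary recipe-style sentence, used only for illustration.
print(tagger("Add 2 cups of flour and a pinch of salt to the bowl."))
```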
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 47 | 0.2783 | 0.4 | 0.5492 | 0.4629 | 0.8910 |
| No log | 2.0 | 94 | 0.1089 | 0.8145 | 0.8780 | 0.8450 | 0.9718 |
| No log | 3.0 | 141 | 0.0273 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 4.0 | 188 | 0.0168 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 5.0 | 235 | 0.0156 | 0.9865 | 0.9898 | 0.9882 | 0.9957 |
| No log | 6.0 | 282 | 0.0129 | 0.9865 | 0.9932 | 0.9899 | 0.9973 |
| No log | 7.0 | 329 | 0.0121 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 8.0 | 376 | 0.0115 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 9.0 | 423 | 0.0108 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
| No log | 10.0 | 470 | 0.0105 | 0.9899 | 0.9932 | 0.9915 | 0.9978 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ml6team/gpt2-medium-dutch-finetune-oscar | 7ae5ea65cb2d434d07da0f3628c0738d8ee5fef5 | 2021-05-23T09:42:53.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"nl",
"transformers",
"adaption",
"recycled",
"gpt2-medium"
] | text-generation | false | ml6team | null | ml6team/gpt2-medium-dutch-finetune-oscar | 71 | 6 | transformers | 5,354 | ---
language: nl
widget:
- text: "De regering heeft beslist dat"
tags:
- adaption
- recycled
- gpt2-medium
- gpt2
pipeline_tag: text-generation
---
# Dutch finetuned GPT2 |
nateraw/baseball-stadium-foods | 1252d68fce7e2a3e3855b43439992beccea3f716 | 2021-06-30T07:11:21.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/baseball-stadium-foods | 71 | null | transformers | 5,355 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: baseball-stadium-foods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9107142686843872
---
# baseball-stadium-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cotton candy

#### hamburger

#### hot dog

#### nachos

#### popcorn
 |
nateraw/donut-or-bagel | 408739a81234d039cbead3c0f956ef1f729a4739 | 2021-07-10T19:54:49.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | nateraw | null | nateraw/donut-or-bagel | 71 | null | transformers | 5,356 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: donut-or-bagel
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# donut-or-bagel
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bagel

#### donut
 |
nateraw/planes-trains-automobiles | dcf495d94e1cb0e9e7fb6bf8ac3c05c74dc3c8df | 2021-08-23T21:42:21.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"generated_from_trainer",
"license:apache-2.0"
] | image-classification | false | nateraw | null | nateraw/planes-trains-automobiles | 71 | null | transformers | 5,357 | ---
license: apache-2.0
tags:
- huggingpics
- image-classification
- generated_from_trainer
metrics:
- accuracy
model_index:
- name: planes-trains-automobiles
results:
- task:
name: Image Classification
type: image-classification
metric:
name: Accuracy
type: accuracy
value: 0.9850746268656716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# planes-trains-automobiles
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the huggingpics dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0534
- Accuracy: 0.9851
## Model description
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### automobiles

#### planes

#### trains

## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0283 | 1.0 | 48 | 0.0434 | 0.9851 |
| 0.0224 | 2.0 | 96 | 0.0548 | 0.9851 |
| 0.0203 | 3.0 | 144 | 0.0445 | 0.9851 |
| 0.0195 | 4.0 | 192 | 0.0534 | 0.9851 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
sonoisa/vl-t5-base-japanese | d5d3c72dcbad8ebb55e51ffe7ede7ba786edb5cd | 2021-10-04T11:13:35.000Z | [
"pytorch",
"t5",
"ja",
"dataset:wikipedia",
"dataset:oscar",
"dataset:cc100",
"dataset:ms_coco",
"dataset:visual_genome",
"dataset:coco_captions",
"dataset:vqa",
"dataset:gqa",
"arxiv:2102.02779",
"transformers",
"vl-t5",
"license:cc-by-sa-4.0"
] | null | false | sonoisa | null | sonoisa/vl-t5-base-japanese | 71 | null | transformers | 5,358 | ---
language: ja
tags:
- vl-t5
license: cc-by-sa-4.0
datasets:
- wikipedia
- oscar
- cc100
- ms_coco
- visual_genome
- coco_captions
- vqa
- gqa
---
# Japanese VL-T5 Pretrained Model
This is a VL-T5 (Unifying Vision-and-Language Tasks via Text Generation) model pretrained on a Japanese corpus.
- VL-T5 paper: https://arxiv.org/abs/2102.02779
- Inference example (requires Google Colab): https://colab.research.google.com/github/sonoisa/VL-T5-ja/blob/master/日本語VL-T5推論.ipynb
|
suhnylla/planes_airlines | 31689e3a1c78ffce0aebfa030bfa00c28a8eafc8 | 2021-07-22T02:21:24.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | suhnylla | null | suhnylla/planes_airlines | 71 | null | transformers | 5,359 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: planes_airlines
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.32307693362236023
---
# planes_airlines
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### planes cathay pacific

#### planes delta airlines

#### planes malaysia airlines

#### planes singapore airlines

#### planes virgin airlines
 |
hf-internal-testing/tiny-plbart | 4744258777b2b19aab82ccab91cc4904b1f305a9 | 2022-04-05T14:38:10.000Z | [
"pytorch",
"plbart",
"text-classification",
"transformers"
] | text-classification | false | hf-internal-testing | null | hf-internal-testing/tiny-plbart | 71 | null | transformers | 5,360 | Entry not found |
shniranjan/wav2vec2-large-xlsr-300m-nepali | f95476bd5f3981d3684da3245b32334865c1550a | 2022-04-15T02:29:21.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ne",
"transformers",
"speech-to-text"
] | automatic-speech-recognition | false | shniranjan | null | shniranjan/wav2vec2-large-xlsr-300m-nepali | 71 | null | transformers | 5,361 | ---
language:
- ne
tags:
- speech-to-text
---
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("shniranjan/wav2vec2-large-xlsr-300m-nepali")
model = Wav2Vec2ForCTC.from_pretrained("shniranjan/wav2vec2-large-xlsr-300m-nepali")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
``` |
zzzzzzttt/swin-tiny-patch4-window7-224-finetuned-eurosat | d6d2e6689168ae3466defaaf0020a46b334d76d8 | 2022-04-14T12:20:10.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | zzzzzzttt | null | zzzzzzttt/swin-tiny-patch4-window7-224-finetuned-eurosat | 71 | null | transformers | 5,362 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9762962962962963
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0654
- Accuracy: 0.9763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2431 | 1.0 | 190 | 0.1119 | 0.9607 |
| 0.1682 | 2.0 | 380 | 0.0921 | 0.9693 |
| 0.1644 | 3.0 | 570 | 0.0654 | 0.9763 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mirbostani/bert-base-uncased-finetuned-newsqa | 4b47302119d59350e95a2f5c6d4aee61dde202e8 | 2022-04-25T21:01:37.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:newsqa",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | mirbostani | null | mirbostani/bert-base-uncased-finetuned-newsqa | 71 | null | transformers | 5,363 | ---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- newsqa
metrics:
- f1
- exact_match
---
# BERT Base Uncased Finetuned on NewsQA
Examples with `noAnswer` and `badQuestion` are not included in the training process.
```shell
$ cd ~/projects/transformers/examples/legacy/question-answering
$ mkdir bert_base_uncased_finetuned_newsqa
$ python run_newsqa.py \
--model_type bert \
--model_name_or_path "bert-base-uncased" \
--do_train \
--do_eval \
--do_lower_case \
--num_train_epochs 2 \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 32 \
--max_seq_length 384 \
--max_grad_norm inf \
--doc_stride 128 \
--train_file "~/projects/data/newsqa/combined-newsqa-data-v1.json" \
--predict_file "~/projects/data/newsqa/combined-newsqa-data-v1.json" \
--output_dir "./bert_base_uncased_finetuned_newsqa" \
--save_steps 20000
```
Results:
```shell
{'exact': 60.19350380096752, 'f1': 73.29371985128037, 'total': 4341, 'HasAns_exact': 60.19350380096752, 'HasAns_f1': 73.29371985128037, 'HasAns_total': 4341, 'best_exact': 60.19350380096752, 'best_exact_thresh': 0.0, 'best_f1': 73.29371985128037, 'best_f1_thresh': 0.0}
```
To prepare the database, follow the instructions on the [NewsQA](https://github.com/Maluuba/newsqa) repository.
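A minimal inference sketch (an illustrative addition, not from the original card; the context and question are arbitrary):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the question-answering pipeline.
qa = pipeline("question-answering", model="mirbostani/bert-base-uncased-finetuned-newsqa")

# An arbitrary news-style context/question pair, used only for illustration.
result = qa(
    question="When was the agreement signed?",
    context="The two companies signed the agreement on Tuesday after months of negotiation.",
)
print(result["answer"], result["score"])
```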
|
beomi/KcELECTRA-small-v2022 | d4f840c28ae2cc26b7639c7ced8ffa61169f4607 | 2022-04-27T05:48:25.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers",
"license:mit"
] | null | false | beomi | null | beomi/KcELECTRA-small-v2022 | 71 | 2 | transformers | 5,364 | ---
license: mit
---
|
mbyanfei/swin-tiny-patch4-window7-224-finetuned-eurosat | f7868f313d240691001ebd43dce8f831a64283e1 | 2022-05-27T18:43:27.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | mbyanfei | null | mbyanfei/swin-tiny-patch4-window7-224-finetuned-eurosat | 71 | null | transformers | 5,365 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9881481481481481
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0508
- Accuracy: 0.9881
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2241 | 1.0 | 1518 | 0.0886 | 0.9719 |
| 0.082 | 2.0 | 3036 | 0.0705 | 0.9815 |
| 0.101 | 3.0 | 4554 | 0.0508 | 0.9881 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
fxtentacle/wav2vec2-xls-r-1b-tevr | 7accec19468fc64f5ea54c11d8bab80342bc29f3 | 2022-06-28T16:22:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"de",
"dataset:common_voice",
"arxiv:2206.12693",
"transformers",
"audio",
"speech",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | fxtentacle | null | fxtentacle/wav2vec2-xls-r-1b-tevr | 71 | 4 | transformers | 5,366 | ---
language: de
datasets:
- common_voice
inference: false
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec 2.0 XLS-R 1B + TEVR tokens + 5-gram LM by Hajo Nils Krabbenhöft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 3.6433399042523233
- name: Test CER
type: cer
value: 1.5398893560981173
---
## Overview
This folder contains a fully trained German speech recognition pipeline
consisting of an acoustic model using the new wav2vec 2.0 XLS-R 1B **TEVR** architecture
and a 5-gram KenLM language model.
For an explanation of the TEVR enhancements and their motivation, please see our paper:
[TEVR: Improving Speech Recognition by Token Entropy Variance Reduction](https://arxiv.org/abs/2206.12693).
[](https://paperswithcode.com/sota/speech-recognition-on-common-voice-german?p=tevr-improving-speech-recognition-by-token)
This pipeline scores a very competitive (as of June 2022) **word error rate of 3.64%** on CommonVoice German.
The character error rate was 1.54%.
## Citation
If you use this ASR pipeline for research, please cite:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.12693,
doi = {10.48550/ARXIV.2206.12693},
url = {https://arxiv.org/abs/2206.12693},
author = {Krabbenhöft, Hajo Nils and Barth, Erhardt},
keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, F.2.1; I.2.6; I.2.7},
title = {TEVR: Improving Speech Recognition by Token Entropy Variance Reduction},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## TEVR Tokenizer Creation / Testing
See https://huggingface.co/fxtentacle/tevr-token-entropy-predictor-de for:
- our trained ByT5 model used to calculate the entropies in the paper
- a Jupyter Notebook to generate a TEVR Tokenizer from a text corpus
- a Jupyter Notebook to generate the illustration image in the paper
## Evaluation
To evaluate this pipeline yourself and/or on your own data, see the `HF Eval Script.ipynb` Jupyter Notebook
or use the following python script:
```python
!pip install --quiet --root-user-action=ignore --upgrade pip
!pip install --quiet --root-user-action=ignore "datasets>=1.18.3" "transformers==4.11.3" librosa jiwer huggingface_hub
!pip install --quiet --root-user-action=ignore https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
!pip install --quiet --root-user-action=ignore --upgrade transformers
!pip install --quiet --root-user-action=ignore torch_audiomentations audiomentations
```
```python
from datasets import load_dataset, Audio, load_metric
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM
import torchaudio.transforms as T
import torch
import unicodedata
import numpy as np
import re
# load testing dataset
testing_dataset = load_dataset("common_voice", "de", split="test")
# replace invisible characters with space
allchars = list(set([c for t in testing_dataset['sentence'] for c in list(t)]))
map_to_space = [c for c in allchars if unicodedata.category(c)[0] in 'PSZ' and c not in 'ʻ-']
replacements = ''.maketrans(''.join(map_to_space), ''.join(' ' for i in range(len(map_to_space))), '\'ʻ')
def text_fix(text):
# change ß to ss
text = text.replace('ß','ss')
# convert dash to space and remove double-space
text = text.replace('-',' ').replace(' ',' ').replace(' ',' ')
# make lowercase
text = text.lower()
# remap all invisible characters to space
text = text.translate(replacements).strip()
# for easier comparison to Zimmermeister, replace unrepresentable characters with ?
text = re.sub("[âşěýňעảנźțãòàǔł̇æồאắîשðșęūāñë生בøúıśžçćńřğ]+","?",text)
# remove multiple spaces (again)
text = ' '.join([w for w in text.split(' ') if w != ''])
return text
# load model
model = AutoModelForCTC.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
model.to('cuda')
# load processor
class HajoProcessor(Wav2Vec2ProcessorWithLM):
@staticmethod
def get_missing_alphabet_tokens(decoder, tokenizer):
return []
processor = HajoProcessor.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
# this function will be called for each WAV file
def predict_single_audio(batch, image=False):
audio = batch['audio']['array']
# resample, if needed
if batch['audio']['sampling_rate'] != 16000:
audio = T.Resample(orig_freq=batch['audio']['sampling_rate'], new_freq=16000)(torch.from_numpy(audio)).numpy()
# normalize
audio = (audio - audio.mean()) / np.sqrt(audio.var() + 1e-7)
# ask HF processor to prepare audio for GPU eval
input_values = processor(audio, return_tensors="pt", sampling_rate=16_000).input_values
# call model on GPU
with torch.no_grad():
logits = model(input_values.to('cuda')).logits.cpu().numpy()[0]
# ask HF processor to decode logits
decoded = processor.decode(logits, beam_width=500)
# return as dictionary
return { 'groundtruth': text_fix(batch['sentence']), 'prediction': decoded.text }
# process all audio files
all_predictions = testing_dataset.map(predict_single_audio, remove_columns=testing_dataset.column_names)
# print results
print('WER', load_metric("wer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
print('CER', load_metric("cer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
```
WER 3.6433399042523233 %
CER 1.5398893560981173 %
|
Jihuai/bert-ancient-chinese | fd2d21041bf427d78405f6f9320478fae7710b54 | 2022-06-09T11:53:34.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"transformers",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Jihuai | null | Jihuai/bert-ancient-chinese | 71 | 0 | transformers | 5,367 | ---
language:
- "zh"
tags:
- "chinese"
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "bert"
- "pytorch"
inference: false
license: "apache-2.0"
---
# bert-ancient-chinese
## Introduction
With the current wave of Artificial Intelligence and Digital Humanities sweeping the world, the automatic analysis of modern Chinese has achieved great results. However, automatic analysis and research on ancient Chinese remain relatively weak and struggle to meet the practical needs of Sinology, history, philology, Chinese history, and education in Sinology and traditional culture. Ancient Chinese raises many open questions about characters, words and parts of speech, and resource construction faces many difficulties. Digital Humanities research requires large-scale corpora and high-performance natural language processing tools for ancient texts. Given that pre-trained language models have greatly improved the accuracy of text mining in English and modern Chinese texts, there is an urgent need for pre-trained models for the automatic processing of ancient texts.
In 2022, we took part in **[EvaHan 2022](https://circse.github.io/LT4HALA/2022/EvaHan)**, the first NLP tool evaluation competition in the field of ancient Chinese. **`bert-ancient-chinese`** was trained to further improve model performance in the open setting.
You can view the introduction of the Chinese version through [this link](https://github.com/Jihuai-wpy/bert-ancient-chinese).
## Further Pre-training
**Compared with the previous pre-trained models, `bert-ancient-chinese` mainly has the following characteristics:**
- Ancient Chinese texts are mostly written in traditional Chinese characters and contain a large number of uncommon characters, so the `vocab table` (vocabulary) of existing pre-trained models lacks some of them. `bert-ancient-chinese` further expands the `vocab` (dictionary) of the pre-trained model by learning from a large-scale corpus. The final `vocab table` size is **38208**; compared to the `bert-base-chinese` vocabulary of **21128** tokens and the `siku-bert` vocabulary of **29791** tokens, `bert-ancient-chinese` has a **larger vocabulary** that also covers more uncommon characters, which helps improve the performance of the model on downstream tasks. The `vocab table` is the vocabulary table stored in the `vocab.txt` file of the pre-trained model.
- `bert-ancient-chinese` uses a larger training set. Whereas `siku-bert` uses only the `"Siku Quanshu"` as its training dataset, we use a larger-scale dataset (about six times the size of the `"Siku Quanshu"`) covering the Cong, Taoism, Buddhism, Confucianism, Poetry, History, Medicine, Art, Yi, and Zi divisions, which is richer in content and wider in scope than the `"Siku Quanshu"`.
- Following the idea of `Domain-Adaptive Pretraining`, `bert-ancient-chinese` continues training from `bert-base-chinese` on an ancient Chinese corpus, yielding a pre-trained model adapted to the automatic processing of ancient Chinese.
## How to use
### Huggingface Transformers
With the `from_pretrained` method of [Huggingface Transformers](https://github.com/huggingface/transformers), the `bert-ancient-chinese` model can be obtained directly online.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Jihuai/bert-ancient-chinese")
model = AutoModel.from_pretrained("Jihuai/bert-ancient-chinese")
```
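Since the checkpoint is a standard BERT masked-language model, it can also be queried through the `fill-mask` pipeline. The sketch below is illustrative; the sentence is an arbitrary classical Chinese phrase:
```python
from transformers import pipeline

# Query the masked-language-model head directly.
fill_mask = pipeline("fill-mask", model="Jihuai/bert-ancient-chinese")

# Arbitrary classical Chinese example sentence with one masked character.
for prediction in fill_mask("學而時習之,不亦[MASK]乎"):
    print(prediction["token_str"], round(prediction["score"], 4))
```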
## Download PTM
The model we provide is the `PyTorch` version.
### From Huggingface
Download directly from the Hugging Face Hub; the model hosted there is kept up to date with the latest version:
- **bert-ancient-chinese:[Jihuai/bert-ancient-chinese · Hugging Face](https://huggingface.co/Jihuai/bert-ancient-chinese)**
### From Cloud Disk
Download address:
| Model | Link |
| :------------------: | :----------------------------------------------------------: |
| bert-ancient-chinese | [Link](https://pan.baidu.com/s/1JC5_64gLT07wgG2hjzqxjg ) Extraction code: qs7x |
## Evaluation & Results
We tested and compared different pre-trained models on the training and test sets provided by the [EvaHan 2022](https://circse.github.io/LT4HALA/2022/EvaHan) competition. We compare the performance of the models by fine-tuning them on the downstream tasks of `Chinese Word Segmentation (CWS)` and `part-of-speech tagging (POS tagging)`.
We use `BERT+CRF` as the baseline model to compare the performance of `siku-bert`, `siku-roberta` and `bert-ancient-chinese` on downstream tasks. To fully utilize the entire training dataset, we employ `K-fold cross-validation`, while keeping all other hyperparameters the same. The evaluation metric is the `F1 score`.
<table>
<tr>
<td></td>
<td colspan="2" align="center"> <i>Zuozhuan</i> </td>
<td colspan="2" align="center"> <i>Shiji</i> </td>
</tr>
<tr>
<td></td>
<td align="center">CWS</td>
<td align="center">POS</td>
<td align="center">CWS</td>
<td align="center">POS</td>
</tr>
<tr>
<td align="center">siku-bert</td>
<td align="center">96.0670%</td>
<td align="center">92.0156%</td>
<td align="center">92.7909%</td>
<td align="center">87.1188%</td>
</tr>
<tr>
<td align="center">siku-roberta</td>
<td align="center">96.0689%</td>
<td align="center">92.0496%</td>
<td align="center">93.0183%</td>
<td align="center">87.5339%</td>
</tr>
<tr>
<td align="center">bert-ancient-chinese</td>
<td align="center"> <b>96.3273%</b> </td>
<td align="center"> <b>92.5027%</b> </td>
<td align="center"> <b>93.2917%</b> </td>
<td align="center"> <b>87.8749%</b> </td>
</tr>
</table>
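For reference, a minimal sketch of how such a token-level fine-tune could be initialised from this checkpoint; the CRF layer used in the `BERT+CRF` baseline is omitted here and the label set is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder B/I label set for word segmentation; the actual EvaHan tag inventory differs.
labels = ["B", "I"]

tokenizer = AutoTokenizer.from_pretrained("Jihuai/bert-ancient-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "Jihuai/bert-ancient-chinese",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```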
## Citing
If our work is helpful for your research, please cite it in your paper.
## Disclaimer
The experimental results presented in this report only reflect performance under a specific dataset and hyperparameter combination and do not represent the intrinsic quality of each model. Results may vary with random seeds and computing hardware. **Users may use the model freely within the scope of the license, but we are not responsible for direct or indirect losses caused by using the contents of this project.**
## Acknowledgment
`bert-ancient-chinese` continues pre-training from [bert-base-chinese](https://huggingface.co/bert-base-chinese).
Thanks to Prof. [Xipeng Qiu](https://xpqiu.github.io/) and the [Natural Language Processing Laboratory of Fudan University](https://nlp.fudan.edu.cn/).
## Contact us
Pengyu Wang:[email protected]
|
lindsayng/t5-base-base-sweep-b3acbf3b | ce971bbd3cd1ab4818a0c1c8bc04ed0fcdf04ff8 | 2022-06-13T14:19:26.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | lindsayng | null | lindsayng/t5-base-base-sweep-b3acbf3b | 71 | null | transformers | 5,368 | Entry not found |
ArnavL/roberta-base-imdb-0 | 887f1f8080ca67925d43c3d81c132ef834bdd2d5 | 2022-07-11T10:46:26.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ArnavL | null | ArnavL/roberta-base-imdb-0 | 71 | null | transformers | 5,369 | Entry not found |
ARTeLab/mbart-summarization-fanpage | 7812dc1714de58152c634f88a19c7eb2a6045e3b | 2022-05-03T06:07:47.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ARTeLab | null | ARTeLab/mbart-summarization-fanpage | 70 | null | transformers | 5,370 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_mbart_fanpage4epoch
results: []
datasets:
- ARTeLab/fanpage
---
# mbart-summarization-fanpage
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the Fanpage dataset for abstractive summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2636
- Gen Len: 75.2413
## Usage
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-fanpage")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-fanpage")
```
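A short end-to-end sketch of producing a summary with the loaded model and tokenizer; the article text and decoding settings are illustrative choices:
```python
article = "Il governo ha presentato oggi il nuovo piano per la transizione energetica..."  # any Italian news article

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
# Depending on the tokenizer configuration, forcing Italian output via
# decoder_start_token_id=tokenizer.lang_code_to_id["it_IT"] may also be needed.
summary_ids = model.generate(
    **inputs,
    num_beams=4,        # illustrative decoding settings
    max_length=130,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```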
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
# Citation
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
AhmedBou/TuniBert | 615e28c7b0bb3c941092293ccd33ca7cd824b627 | 2021-10-05T01:47:35.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | AhmedBou | null | AhmedBou/TuniBert | 70 | null | transformers | 5,371 | Entry not found |
Helsinki-NLP/opus-mt-cpp-en | 523c5f73e933411d9106072f70a53f4f416685cc | 2021-01-18T07:54:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"id",
"cpp",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-cpp-en | 70 | null | transformers | 5,372 | ---
language:
- id
- cpp
- en
tags:
- translation
license: apache-2.0
---
### cpp-eng
* source group: Creoles and pidgins, Portuguese-based
* target group: English
* OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md)
* model: transformer
* source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 |
| Tatoeba-test.multi.eng | 39.7 | 0.580 |
| Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 |
### System Info:
- hf_name: cpp-eng
- source_languages: cpp
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp', 'en']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpp
- tgt_alpha3: eng
- short_pair: cpp-en
- chrF2_score: 0.58
- bleu: 39.7
- brevity_penalty: 0.972
- ref_len: 37399.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpp
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpp-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-fr-ms | 0fd5c97c9aea1f88f99f1636982864a01e57d895 | 2021-01-18T08:45:45.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"ms",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-ms | 70 | null | transformers | 5,373 | ---
language:
- fr
- ms
tags:
- translation
license: apache-2.0
---
### fra-msa
* source group: French
* target group: Malay (macrolanguage)
* OPUS readme: [fra-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.eval.txt)
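A minimal usage sketch showing the target-language token; the French sentence is arbitrary:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The ">>id<<" prefix selects the target variant; ">>ind<<" and ">>zsm_Latn<<" are listed above.
src_text = [">>zsm_Latn<< Bonjour, comment allez-vous ?"]

translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```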
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.msa | 35.3 | 0.617 |
### System Info:
- hf_name: fra-msa
- source_languages: fra
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ms']
- src_constituents: {'fra'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: msa
- short_pair: fr-ms
- chrF2_score: 0.617
- bleu: 35.3
- brevity_penalty: 0.978
- ref_len: 6696.0
- src_name: French
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: ms
- prefer_old: False
- long_pair: fra-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-vi-it | b505bfc06a5df56401a8206679e920b3898cc004 | 2020-08-21T14:42:51.000Z | [
"pytorch",
"marian",
"text2text-generation",
"vi",
"it",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-vi-it | 70 | null | transformers | 5,374 | ---
language:
- vi
- it
tags:
- translation
license: apache-2.0
---
### vie-ita
* source group: Vietnamese
* target group: Italian
* OPUS readme: [vie-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.ita | 31.2 | 0.548 |
### System Info:
- hf_name: vie-ita
- source_languages: vie
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'it']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: ita
- short_pair: vi-it
- chrF2_score: 0.5479999999999999
- bleu: 31.2
- brevity_penalty: 0.932
- ref_len: 1774.0
- src_name: Vietnamese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: it
- prefer_old: False
- long_pair: vie-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
OthmaneJ/distil-wav2vec2 | e7d240706c12f07b823716eae6589c79d80ed72f | 2021-08-25T07:59:39.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | OthmaneJ | null | OthmaneJ/distil-wav2vec2 | 70 | 7 | transformers | 5,375 | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
# Distil-wav2vec2
This model is a distilled version of the wav2vec2 model (https://arxiv.org/pdf/2006.11477.pdf). It is 45% smaller and twice as fast as the original wav2vec2 base model.
# Evaluation results
This model achieves the following results (speed is measured for a batch size of 64):
|Model| Size| WER LibriSpeech test-clean |WER LibriSpeech test-other|Speed on CPU|Speed on GPU|
|----------| ------------- |-------------|-----------| ------|----|
|Distil-wav2vec2| 197.9 Mb | 0.0983 | 0.2266|0.4006s| 0.0046s|
|wav2vec2-base| 360 Mb | 0.0389 | 0.1047|0.4919s| 0.0082s|
# Usage
A notebook (which runs seamlessly on Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2
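For a quick start without the notebook, a minimal transcription sketch is shown below. It assumes the repository ships the standard wav2vec2 processor files; the audio path is a placeholder and the input must be 16 kHz mono speech:
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("OthmaneJ/distil-wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("OthmaneJ/distil-wav2vec2")

speech, _ = librosa.load("sample.wav", sr=16_000)  # placeholder path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```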
|
it5/it5-large-news-summarization | 4e7864b2ee439fc04a53d44b93da81a5720094fb | 2022-03-09T07:53:26.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"transformers",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible"
] | summarization | false | it5 | null | it5/it5-large-news-summarization | 70 | null | transformers | 5,376 | ---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
model-index:
- name: it5-large-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.249
name: "Test Rouge1 IlPost"
- type: rouge2
value: 0.102
name: "Test Rouge2 IlPost"
- type: rougeL
value: 0.199
name: "Test RougeL IlPost"
- type: bertscore
value: 0.313
name: "Test BERTScore IlPost"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
- type: rouge1
value: 0.253
name: "Test Rouge1 Fanpage"
- type: rouge2
value: 0.099
name: "Test Rouge2 Fanpage"
- type: rougeL
value: 0.191
name: "Test RougeL Fanpage"
- type: bertscore
value: 0.316
name: "Test BERTScore Fanpage"
args:
- model_type: "dbmdz/bert-base-italian-xxl-uncased"
- lang: "it"
- num_layers: 10
- rescale_with_baseline: True
- baseline_path: "bertscore_baseline_ita.tsv"
co2_eq_emissions:
emissions: "51g"
source: "Google Cloud Platform Carbon Footprint"
training_type: "fine-tuning"
geographical_location: "Eemshaven, Netherlands, Europe"
hardware_used: "1 TPU v3-8 VM"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Large for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-large-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
jiangg/chembert_cased | 6764448f64f869e2698ae20f64437feb9cb12f2c | 2021-08-12T18:25:26.000Z | [
"pytorch",
"transformers"
] | null | false | jiangg | null | jiangg/chembert_cased | 70 | 3 | transformers | 5,377 | This is the pre-trained model presented in [Automated Chemical Reaction Extraction from Scientific Literature](https://pubs.acs.org/doi/pdf/10.1021/acs.jcim.1c00284), which is a BERT model trained on chemical literature data.
The training corpus was taken from ~200K ACS publications, more details can be found in the paper.
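A minimal sketch for obtaining contextual embeddings from the checkpoint, assuming the standard BERT tokenizer files are included in the repository; the example sentence is arbitrary:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jiangg/chembert_cased")
model = AutoModel.from_pretrained("jiangg/chembert_cased")

# Arbitrary chemistry-flavoured sentence.
text = "The aldehyde was reduced with sodium borohydride in methanol."
outputs = model(**tokenizer(text, return_tensors="pt"))

print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size)
```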
If using these models, please cite the following paper:
```latex
@article{guo2021automated,
title={Automated Chemical Reaction Extraction from Scientific Literature},
author={Guo, Jiang and Ibanez-Lopez, A Santiago and Gao, Hanyu and Quach, Victor and Coley, Connor W and Jensen, Klavs F and Barzilay, Regina},
journal={Journal of Chemical Information and Modeling},
year={2021},
publisher={ACS Publications}
}
```
|
lewtun/oz-fauna | cca6e3688e27fb69df3f4dfc91bc8a46a9ce5017 | 2021-07-01T15:25:24.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | lewtun | null | lewtun/oz-fauna | 70 | null | transformers | 5,378 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: oz-fauna
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8571428656578064
---
# oz-fauna
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### dingo

#### koala

#### kookaburra

#### possum

#### tasmanian devil
 |
megagonlabs/bimeanvae-amzn | a5c0af3fe7f313d1b47d6376c1789aea5696e973 | 2021-09-11T00:10:54.000Z | [
"pytorch",
"en",
"transformers",
"summarization",
"license:bsd-3-clause"
] | summarization | false | megagonlabs | null | megagonlabs/bimeanvae-amzn | 70 | null | transformers | 5,379 | ---
language: en
tags:
- summarization
inference: false
license: bsd-3-clause
---
## BiMeanVAE model
See original GitHub repo for more details [here](https://github.com/megagonlabs/coop)
|
ml6team/gpt2-medium-german-finetune-oscar | 80aa19302f16278286d4917d763413d480d1ed21 | 2021-05-23T09:45:30.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"de",
"transformers",
"adaption",
"recycled",
"gpt2-medium"
] | text-generation | false | ml6team | null | ml6team/gpt2-medium-german-finetune-oscar | 70 | 7 | transformers | 5,380 | ---
language: de
widget:
- text: "es wird entschieden, dass es"
tags:
- adaption
- recycled
- gpt2-medium
- gpt2
pipeline_tag: text-generation
---
# German fine-tuned GPT-2
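A minimal generation sketch; the prompt matches the widget example above and the sampling settings are illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ml6team/gpt2-medium-german-finetune-oscar")

# Prompt taken from the widget above; sampling settings are illustrative choices.
print(generator("es wird entschieden, dass es", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```
|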
othrif/wav2vec2-large-xlsr-moroccan | 198d2b645573e7b2cef5671c15f7f2175e751a36 | 2021-04-15T03:16:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ary",
"dataset:mgb5",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | othrif | null | othrif/wav2vec2-large-xlsr-moroccan | 70 | null | transformers | 5,381 | ---
language: ary
datasets:
- mgb5
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Moroccan Arabic dialect by Othmane Rifki
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: MGB5 from ELDA and https://arabicspeech.org/
type: ELDA and https://arabicspeech.org/
args: ary
metrics:
- name: Test WER
type: wer
value: 66.45
---
# Wav2Vec2-Large-XLSR-53-Moroccan
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [MGB5 Moroccan Arabic](http://www.islrn.org/resources/938-639-614-524-5/) kindly provided by [ELDA](http://www.elra.info/en/about/elda/) and [ArabicSpeech](https://arabicspeech.org/mgb5/).
In order to have access to MGB5, please request it from ELDA.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import re
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
dataset = load_dataset("ma_speech_corpus", split="test")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\\'\\�]'
def remove_special_characters(batch):
batch["text"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower() + " "
return batch
dataset = dataset.map(remove_special_characters)
dataset = dataset.select(range(10))
def speech_file_to_array_fn(batch):
start, stop = batch['segment'].split('_')
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array, sampling_rate = sf.read(batch["path"], start=int(float(start) * sampling_rate),
stop=int(float(stop) * sampling_rate))
batch["speech"] = librosa.resample(speech_array, sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
batch["target_text"] = batch["text"]
return batch
dataset = dataset.map(
speech_file_to_array_fn
)
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
return batch
dataset = dataset.map(predict, batched=True, batch_size=32)
for reference, predicted in zip(dataset["sentence"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: عشرين ألفريال الوحده وشي خمسميه دريال
predicted: عشرين علف ريا لوحده وشي خمسميات ريال
--
reference: واحد جوج تلاتة ربعه خمسة ستة
predicted: غيحك تويش تتبة نتاست
--
reference: هي هاديك غتجينا تقريبا ميه وسته وعشرين ألف ريال
predicted: ياض كتجينا تقريبه ميه أو ستي و عشيناأفرين
--
reference: ###والصرف ليبقا نجيب بيه الصالون فلهوندا... أهاه نديروها علاش لا؟...
predicted: أواصرف ليبقا نجيب يه اصالون فالهندا أه نديروها علاش لا
--
reference: ###صافي مشات... أنا أختي معندي مندير بهاد صداع الراس...
predicted: صافي مشات أنا خصي معندي مندير بهاد داع راسك
ف
--
reference: خلصو ليا غير لكريدي ديالي وديرو ليعجبكوم
predicted: خلصو ليا غير لكريدي ديالي أوديرو لي عجبكوم
--
reference: أنا نتكلف يلاه لقى شي حاجه نشغل بيها راسي
predicted: أنا نتكلف يالله لقا شي حاجه نشغل بيها راسي
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import re
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
eval_dataset = load_dataset("ma_speech_corpus", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\\'\\�]'
def remove_special_characters(batch):
batch["text"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower() + " "
return batch
eval_dataset = eval_dataset.map(remove_special_characters, remove_columns=["sentence"])
#eval_dataset = eval_dataset.select(range(100))
def speech_file_to_array_fn(batch):
start, stop = batch['segment'].split('_')
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array, sampling_rate = sf.read(batch["path"], start=int(float(start) * sampling_rate),
stop=int(float(stop) * sampling_rate))
batch["speech"] = librosa.resample(speech_array, sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
batch["target_text"] = batch["text"]
return batch
eval_dataset = eval_dataset.map(
speech_file_to_array_fn,
remove_columns=eval_dataset.column_names
)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = eval_dataset.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["target_text"])))
```
**Test Result**: 66.45
## Training
The [MGB5](http://www.islrn.org/resources/938-639-614-524-5/) `train` and `validation` splits were used for training.
The script used for training can be found [here](https://github.com/othrif/xlsr-wav2vec2) |
p208p2002/gpt2-squad-qg-hl | 393382bf4dd5c8ffc6b990c3f2acf9b328af079c | 2021-05-23T10:54:57.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"dataset:squad",
"arxiv:1606.05250",
"arxiv:1705.00106",
"transformers",
"question-generation"
] | text-generation | false | p208p2002 | null | p208p2002/gpt2-squad-qg-hl | 70 | null | transformers | 5,382 | ---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
**This is a Reproduce Version**
More detail: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?
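A hedged sketch of querying the model through the `text-generation` pipeline; the exact separator conventions used at training time may differ, so consult the linked repository for the canonical inference code:
```python
from transformers import pipeline

qg = pipeline("text-generation", model="p208p2002/gpt2-squad-qg-hl")

context = (
    "Harry Potter is a series of seven fantasy novels written by "
    "British author, [HL]J. K. Rowling[HL]."
)
# The returned text contains the highlighted context followed by the generated question.
print(qg(context, max_length=128)[0]["generated_text"])
```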
## Data setting
We report results under two dataset settings, as follows:
### SQuAD
- train: 87599\\\\t
- validation: 10570
> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877
> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)
## Available models
- BART
- GPT2
- T5
## Expriments
We report scores with the `NQG Scorer` used in SQuAD NQG.
Unless otherwise specified, the model size defaults to "base".
### SQuAD
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64 |
GPT2-HLSQG |49.31 |33.95 |25.41| 19.69 |22.29 |48.82 |
T5-HLSQG |54.29 |39.22 |30.43 |24.26 |25.56 |53.11 |
### SQuAD NQG
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
---------------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23 |
BART-HLSQG |54.12 |38.19 |28.84 |22.35 |24.55 |51.03 |
GPT2-HLSQG |49.82 |33.69 |24.71 |18.63 |21.90 |47.60 |
T5-HLSQG |53.13 |37.60 |28.62 |22.38 |24.48 |51.20 | |
pszemraj/gpt-neo-tiny-JIBA | 1671445eaa67954f7b22abfdf21c32293aef7a6c | 2022-02-01T23:33:04.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers",
"generated_from_trainer",
"gpt-neo",
"license:apache-2.0"
] | text-generation | false | pszemraj | null | pszemraj/gpt-neo-tiny-JIBA | 70 | null | transformers | 5,383 | ---
license: apache-2.0
tags:
- generated_from_trainer
- gpt-neo
widget:
- text: "waddup bro?\n"
example_title: "waddup"
- text: "Are you going to be on League tonight?\n"
example_title: "League"
- text: "One of my hot takes is that dogs are cute. What do you think?\n"
example_title: "hot take"
- text: "what planet is brandon from?\n"
example_title: "brandon"
- text: "hello there.\n"
example_title: "bold one"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 2
do_sample: True
top_p: 0.97
top_k: 30
repetition_penalty: 5.2
---
# gpt-neo-125M-JIBA_DS-slack_Ep-40_Bs-8
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
- While it may appear that over-fitting is a major issue, the data is first sorted by channel and then by time, so the test set covers a different channel than the ones discussed during training; it is therefore expected that the validation loss for a specific topic increases during training. This does not rule out over-fitting as a problem, but it is not immediately alarming.
- This could be mitigated by stratifying the tokenized batches, but due to some intricacies that was not done for this MVP. If you are still reading this sentence, you are welcome to do it yourself.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 190 | 4.0625 |
| No log | 2.0 | 380 | 4.0195 |
| 3.9459 | 3.0 | 570 | 4.0078 |
| 3.9459 | 4.0 | 760 | 4.0117 |
| 3.9459 | 5.0 | 950 | 4.0352 |
| 3.5297 | 6.0 | 1140 | 4.0625 |
| 3.5297 | 7.0 | 1330 | 4.1094 |
| 3.2215 | 8.0 | 1520 | 4.1680 |
| 3.2215 | 9.0 | 1710 | 4.2305 |
| 3.2215 | 10.0 | 1900 | 4.3047 |
| 2.9058 | 11.0 | 2090 | 4.3906 |
| 2.9058 | 12.0 | 2280 | 4.4844 |
| 2.9058 | 13.0 | 2470 | 4.5977 |
| 2.5865 | 14.0 | 2660 | 4.6992 |
| 2.5865 | 15.0 | 2850 | 4.8125 |
| 2.2434 | 16.0 | 3040 | 4.9258 |
| 2.2434 | 17.0 | 3230 | 5.0391 |
| 2.2434 | 18.0 | 3420 | 5.1562 |
| 1.9185 | 19.0 | 3610 | 5.2773 |
| 1.9185 | 20.0 | 3800 | 5.3789 |
| 1.9185 | 21.0 | 3990 | 5.4961 |
| 1.6238 | 22.0 | 4180 | 5.5977 |
| 1.6238 | 23.0 | 4370 | 5.7109 |
| 1.3409 | 24.0 | 4560 | 5.8164 |
| 1.3409 | 25.0 | 4750 | 5.9023 |
| 1.3409 | 26.0 | 4940 | 5.9961 |
| 1.11 | 27.0 | 5130 | 6.0820 |
| 1.11 | 28.0 | 5320 | 6.1797 |
| 0.9143 | 29.0 | 5510 | 6.2539 |
| 0.9143 | 30.0 | 5700 | 6.3398 |
| 0.9143 | 31.0 | 5890 | 6.4258 |
| 0.7343 | 32.0 | 6080 | 6.5039 |
| 0.7343 | 33.0 | 6270 | 6.5859 |
| 0.7343 | 34.0 | 6460 | 6.6602 |
| 0.5904 | 35.0 | 6650 | 6.7305 |
| 0.5904 | 36.0 | 6840 | 6.7969 |
| 0.4654 | 37.0 | 7030 | 6.8711 |
| 0.4654 | 38.0 | 7220 | 6.9453 |
| 0.4654 | 39.0 | 7410 | 7.0156 |
| 0.3647 | 40.0 | 7600 | 7.0820 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
skimai/spanberta-base-cased-ner-conll02 | dbaa1f489188897b4232c70825cbfa12bba275bb | 2021-05-20T21:50:52.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | skimai | null | skimai/spanberta-base-cased-ner-conll02 | 70 | null | transformers | 5,384 | Entry not found |
uclanlp/plbart-c-cpp-defect-detection | 51fe64169b0e3da9086766747b6be33f523636b9 | 2021-11-09T17:18:32.000Z | [
"pytorch",
"plbart",
"text-classification",
"transformers"
] | text-classification | false | uclanlp | null | uclanlp/plbart-c-cpp-defect-detection | 70 | null | transformers | 5,385 | Entry not found |
youzanai/clip-product-title-chinese | 4bbf81603024c2c2b4f19c4fc2babdf2e1d32679 | 2022-02-09T08:59:51.000Z | [
"pytorch",
"clip_chinese_model",
"transformers"
] | null | false | youzanai | null | youzanai/clip-product-title-chinese | 70 | 5 | transformers | 5,386 | <br />
<p align="center">
<h1 align="center">clip-product-title-chinese</h1>
</p>
## A CLIP model trained on Youzan product images and product titles
## Usage
Before using the model, please run `git clone https://github.com/youzanai/trexpark.git`
```python
import torch
from src.clip.clip import ClipProcesserChinese, ClipChineseModel
import requests
from PIL import Image
clip_processor = ClipProcesserChinese.from_pretrained('youzanai/clip-product-title-chinese')
model = ClipChineseModel.from_pretrained('youzanai/clip-product-title-chinese')
url = 'http://img.yzcdn.cn/upload_files/2015/04/21/0140dac4657f874f2acff9294b28088c.jpg'
img = Image.open(requests.get(url, stream=True).raw).convert('RGB')
imgs = [img]
texts = ['运动鞋', '红色连衣裙', '黑色连衣裙', '大衣', '文具']
f = clip_processor(texts, imgs, return_tensors='pt', truncation=True, padding=True)
del f['token_type_ids']
with torch.no_grad():
out = model(**f)
logits_per_image, logits_per_text = out['logits_per_image'], out['logits_per_text']
print(logits_per_image.softmax(dim=-1).cpu().detach().numpy())
# Result: [[1.1700666e-07 9.9948394e-01 5.1582896e-04 4.7687358e-11 6.9604440e-08]]
```
|
nntadotzips/bert-base-cased-SynonymReplacementMethod_5703sem0of1to1999and5000to7162__8627sem1 | 2631131d1f6e1d982bcbf079d93a91af235b478c | 2022-03-17T10:51:04.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | nntadotzips | null | nntadotzips/bert-base-cased-SynonymReplacementMethod_5703sem0of1to1999and5000to7162__8627sem1 | 70 | null | transformers | 5,387 | Entry not found |
mcsabai/huBert-fine-tuned-hungarian-squadv1 | 4d72ef9cbd028f38d67dc46f70215387e97c3fda | 2022-05-10T10:59:53.000Z | [
"pytorch",
"tf",
"bert",
"question-answering",
"hu",
"transformers",
"autotrain_compatible"
] | question-answering | false | mcsabai | null | mcsabai/huBert-fine-tuned-hungarian-squadv1 | 70 | 1 | transformers | 5,388 | ---
language: hu
thumbnail:
tags:
- question-answering
- bert
widget:
- text: "Melyik folyó szeli ketté Budapestet?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
- text: "Mivel juthatunk fel az Óvárosba?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
---
## MODEL DESCRIPTION
huBERT base model (cased) fine-tuned on SQuAD v1
- huBert model + Tokenizer: https://huggingface.co/SZTAKI-HLT/hubert-base-cc
- Hungarian SQUAD v1 dataset: Machine Translated SQuAD dataset (Google Translate API)
- This is a demo model. Date of publication: 2022.03.27.
## Model in action
- Fast usage with pipelines:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mcsabai/huBert-fine-tuned-hungarian-squadv1",
tokenizer="mcsabai/huBert-fine-tuned-hungarian-squadv1"
)
predictions = qa_pipeline({
'context': "Anita vagyok és Budapesten élek már több mint 4 éve.",
'question': "Hol lakik Anita?"
})
print(predictions)
# output:
# {'score': 0.9892364144325256, 'start': 16, 'end': 26, 'answer': 'Budapesten'}
```
|
johnnydevriese/vit_beans | 3121791c03bfb93ee61a48d5b995b485e400cb89 | 2022-04-01T02:24:41.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:beans",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | johnnydevriese | null | johnnydevriese/vit_beans | 70 | null | transformers | 5,389 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit_beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9699248120300752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1176
- Accuracy: 0.9699
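A minimal inference sketch; the image path is a placeholder and this assumes the feature extractor configuration and label mapping are saved with the checkpoint:
```python
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

extractor = ViTFeatureExtractor.from_pretrained("johnnydevriese/vit_beans")
model = ViTForImageClassification.from_pretrained("johnnydevriese/vit_beans")

image = Image.open("leaf.jpg")  # placeholder path to a bean-leaf photo
logits = model(**extractor(images=image, return_tensors="pt")).logits
print(model.config.id2label[logits.argmax(-1).item()])
```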
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.10.3
|
dimbyTa/rock-challenge-DeiT-solo-2 | ae8f0b5b82a70cdc53b49a71f46097ad3354a53e | 2022-04-23T15:54:30.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | dimbyTa | null | dimbyTa/rock-challenge-DeiT-solo-2 | 70 | null | transformers | 5,390 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rock-challenge-DeiT-solo-2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8100078105926514
---
# rock-challenge-DeiT-solo-2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fines

#### large

#### medium

#### pellets
 |
csebuetnlp/banglishbert | 88f2777acd65160b2a6c07e5ffef2d232daadf87 | 2022-05-10T05:13:47.000Z | [
"pytorch",
"electra",
"pretraining",
"bn",
"en",
"arxiv:2101.00204",
"transformers"
] | null | false | csebuetnlp | null | csebuetnlp/banglishbert | 70 | null | transformers | 5,391 | ---
language:
- bn
- en
licenses:
- cc-by-nc-sa-4.0
---
# BanglishBERT
This repository contains the pretrained discriminator checkpoint of the model **BanglishBERT**. This is an [ELECTRA](https://openreview.net/pdf?id=r1xMH1BtvB) discriminator model pretrained with the Replaced Token Detection (RTD) objective on large amounts of Bengali and English corpora. BanglishBERT achieves state-of-the-art **zero-shot cross-lingual transfer** results in many of the NLP tasks in Bengali.
For finetuning on different downstream tasks such as `Sentiment classification`, `Named Entity Recognition`, `Natural Language Inference` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/banglabert).
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository uses this normalization by default. If you need to adapt the pretrained model for a different task make sure the text units are normalized using this pipeline before tokenizing to get best results. A basic example is given below:
## Using this model as a discriminator in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForPreTraining, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
import torch
model = AutoModelForPreTraining.from_pretrained("csebuetnlp/banglishbert")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglishbert")
original_sentence = "আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = "আমি হতাশ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।"
fake_sentence = normalize(fake_sentence) # this normalization step is required before tokenizing the text
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = model(fake_inputs).logits
predictions = torch.round((torch.sign(discriminator_outputs) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
print("\n" + "-" * 50)
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()[1:-1]]
print("\n" + "-" * 50)
```
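For downstream fine-tuning, the official scripts linked above are the reference; the snippet below is only a rough sketch of how the checkpoint plugs into a standard sequence-classification head (the label count and example text are placeholders, and the classifier layer is freshly initialized, so it still needs training).
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglishbert")
# num_labels=2 is an assumption for a binary task; the classification head is randomly initialized.
model = AutoModelForSequenceClassification.from_pretrained("csebuetnlp/banglishbert", num_labels=2)

text = normalize("আমি কৃতজ্ঞ কারণ আপনি আমার জন্য অনেক কিছু করেছেন।")  # normalize before tokenizing
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```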
## Benchmarks
* Zero-shot cross-lingual transfer-learning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 27.05 | 62.22 | 39.27 | 59.01/64.18 | 50.35 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 42.03 | 72.18 | 45.37 | 55.03/61.83 | 55.29 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 49.49 | 78.13 | 56.48 | 71.13/77.70 | 66.59 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 48.39 | 75.26 | 55.56 | 72.87/78.63 | 66.14 |
* Supervised fine-tuning
| Model | Params | SC (macro-F1) | NLI (accuracy) | NER (micro-F1) | QA (EM/F1) | BangLUE score |
|----------------|-----------|-----------|-----------|-----------|-----------|-----------|
|[mBERT](https://huggingface.co/bert-base-multilingual-cased) | 180M | 67.59 | 75.13 | 68.97 | 67.12/72.64 | 70.29 |
|[XLM-R (base)](https://huggingface.co/xlm-roberta-base) | 270M | 69.54 | 78.46 | 73.32 | 68.09/74.27 | 72.82 |
|[XLM-R (large)](https://huggingface.co/xlm-roberta-large) | 550M | 70.97 | 82.40 | 78.39 | 73.15/79.06 | 76.79 |
|[sahajBERT](https://huggingface.co/neuropark/sahajBERT) | 18M | 71.12 | 76.92 | 70.94 | 65.48/70.69 | 71.03 |
|[BanglishBERT](https://huggingface.co/csebuetnlp/banglishbert) | 110M | 70.61 | 80.95 | 76.28 | 72.43/78.40 | 75.73 |
|[BanglaBERT](https://huggingface.co/csebuetnlp/banglabert) | 110M | 72.89 | 82.80 | 77.78 | 72.63/79.34 | **77.09** |
The benchmarking datasets are as follows:
* **SC:** **[Sentiment Classification](https://aclanthology.org/2021.findings-emnlp.278)**
* **NER:** **[Named Entity Recognition](https://multiconer.github.io/competition)**
* **NLI:** **[Natural Language Inference](https://github.com/csebuetnlp/banglabert/#datasets)**
* **QA:** **[Question Answering](https://github.com/csebuetnlp/banglabert/#datasets)**
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{bhattacharjee-etal-2022-banglabert,
title = {BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla},
author = "Bhattacharjee, Abhik and
Hasan, Tahmid and
Mubasshir, Kazi and
Islam, Md. Saiful and
Uddin, Wasi Ahmad and
Iqbal, Anindya and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Findings of the North American Chapter of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = {2022},
url = {https://arxiv.org/abs/2101.00204},
eprinttype = {arXiv},
eprint = {2101.00204}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
aricibo/swin-tiny-patch4-window7-224-finetuned-eurosat | 611dcdf9bd96368abcf00d9f1a058aa73c861344 | 2022-05-20T07:48:24.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | aricibo | null | aricibo/swin-tiny-patch4-window7-224-finetuned-eurosat | 70 | null | transformers | 5,392 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9725925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- Accuracy: 0.9726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto `TrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
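The list above corresponds roughly to the following `TrainingArguments`; this is a hedged reconstruction, and the output directory and evaluation strategy are assumptions not stated in the card.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 total train batch size on one device
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    evaluation_strategy="epoch",     # assumption, matching the per-epoch results table
    # the default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08
)
```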
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.18 | 1.0 | 190 | 0.0844 | 0.9689 |
| 0.1347 | 2.0 | 380 | 0.0657 | 0.9726 |
| 0.1459 | 3.0 | 570 | 0.0657 | 0.9726 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
schoenml/swin-tiny-patch4-window7-224-finetuned-eurosat | 100220f08c12c20b02f40ec9de2ca6486756b222 | 2022-05-25T15:56:50.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | schoenml | null | schoenml/swin-tiny-patch4-window7-224-finetuned-eurosat | 70 | null | transformers | 5,393 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1551
- eval_accuracy: 0.9474
- eval_runtime: 13.1569
- eval_samples_per_second: 205.216
- eval_steps_per_second: 6.46
- epoch: 1.0
- step: 190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
DaBaap/Chat-Bot-Batman | 5dcf5b5c1043435e7fe25fe75c0bddafb92c96ce | 2022-05-27T23:13:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | DaBaap | null | DaBaap/Chat-Bot-Batman | 70 | null | transformers | 5,394 | ---
tags:
- conversational
--- |
mrm8488/bertin-gpt-j-6B-ES-8bit | d87f4e9ad6d12594788bda91ddeac3c6f8efce21 | 2022-06-03T11:35:42.000Z | [
"pytorch",
"gptj",
"text-generation",
"es",
"arxiv:2106.09685",
"arxiv:2110.02861",
"transformers",
"gpt-j",
"spanish",
"gpt-j-6b",
"license:wtfpl"
] | text-generation | false | mrm8488 | null | mrm8488/bertin-gpt-j-6B-ES-8bit | 70 | 2 | transformers | 5,395 | ---
license: wtfpl
language: es
tags:
- gpt-j
- spanish
- gpt-j-6b
---
# BERTIN-GPT-J-6B with 8-bit weights (Quantized)
This model (and model card) is an adaptation of [hivemind/gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), so all credits to him/her.
This is a version of **bertin-project/bertin-gpt-j-6B** that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**.
Here's how to run it: [Open In Colab](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or on CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).
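The snippet below is a toy sketch of that storage/compute split: an int8-stored weight that is de-quantized just-in-time, plus a small trainable low-rank (LoRA-style) adapter. It is purely illustrative, using simple per-row absmax quantization rather than the block-wise nonlinear scheme used here, and no custom kernels.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenInt8LinearWithLoRA(nn.Module):
    """Illustration only: int8 storage + just-in-time de-quantization + LoRA adapter."""

    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        w = linear.weight.data
        scale = w.abs().max(dim=1, keepdim=True).values.clamp(min=1e-8) / 127.0
        self.register_buffer("w_int8", torch.round(w / scale).to(torch.int8))  # frozen, 8-bit
        self.register_buffer("scale", scale)
        self.bias = nn.Parameter(linear.bias.data.clone()) if linear.bias is not None else None
        self.lora_a = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)  # trainable
        self.lora_b = nn.Parameter(torch.zeros(linear.out_features, rank))        # trainable

    def forward(self, x):
        w = self.w_int8.float() * self.scale  # de-quantize just-in-time
        return F.linear(x, w, self.bias) + F.linear(F.linear(x, self.lora_a), self.lora_b)

# Only the adapter (and bias) receive gradients; in practice you would pair this
# with an 8-bit optimizer such as bitsandbytes' Adam8bit.
layer = FrozenInt8LinearWithLoRA(nn.Linear(16, 32))
optimizer = torch.optim.Adam([p for p in layer.parameters() if p.requires_grad], lr=1e-4)
```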

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but the difference is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpointing (which adds a 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
### Where can I train for free?
You can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
### Can I use this technique with other models?
The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
### How to use
```sh
wget https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-8bit/resolve/main/utils.py -O Utils.py
pip install transformers
pip install bitsandbytes-cuda111==0.26.0
```
```py
import transformers
import torch
from Utils import GPTJBlock, GPTJForCausalLM
device = 'cuda' if torch.cuda.is_available() else 'cpu'
transformers.models.gptj.modeling_gptj.GPTJBlock = GPTJBlock # monkey-patch GPT-J
tokenizer = transformers.AutoTokenizer.from_pretrained("mrm8488/bertin-gpt-j-6B-ES-8bit")
model = GPTJForCausalLM.from_pretrained("mrm8488/bertin-gpt-j-6B-ES-8bit", pad_token_id=tokenizer.eos_token_id, low_cpu_mem_usage=True).to(device)
prompt = tokenizer("El sentido de la vida es", return_tensors='pt')
prompt = {key: value.to(device) for key, value in prompt.items()}
out = model.generate(**prompt, max_length=64, do_sample=True)
print(tokenizer.decode(out[0]))
``` |
facebook/genre-kilt | d5c718b8bb571121a0d74d5bbc9a1d69a9a9c312 | 2022-06-14T14:05:20.000Z | [
"pytorch",
"tf",
"jax",
"bart",
"text2text-generation",
"en",
"arxiv:2010.00904",
"arxiv:1910.13461",
"arxiv:2009.02252",
"transformers",
"retrieval",
"entity-retrieval",
"named-entity-disambiguation",
"entity-disambiguation",
"named-entity-linking",
"entity-linking",
"autotrain_compatible"
] | text2text-generation | false | facebook | null | facebook/genre-kilt | 70 | null | transformers | 5,396 | ---
language:
- en
tags:
- retrieval
- entity-retrieval
- named-entity-disambiguation
- entity-disambiguation
- named-entity-linking
- entity-linking
- text2text-generation
---
# GENRE
The GENRE (Generative ENtity REtrieval) system as presented in [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904) implemented in pytorch.
In a nutshell, GENRE uses a sequence-to-sequence approach to entity retrieval (e.g., linking), based on the fine-tuned [BART](https://arxiv.org/abs/1910.13461) architecture. GENRE performs retrieval by generating the unique entity name conditioned on the input text, using constrained beam search to only generate valid identifiers. The model was first released in the [facebookresearch/GENRE](https://github.com/facebookresearch/GENRE) repository using `fairseq` (the `transformers` models are obtained with a conversion script similar to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)).
This model was trained on the full training set of [KILT](https://arxiv.org/abs/2009.02252) (i.e., 11 datasets for fact-checking, entity-linking, slot filling, dialogue, open-domain extractive and abstractive QA).
## BibTeX entry and citation info
**Please consider citing our works if you use code from this repository.**
```bibtex
@inproceedings{decao2020autoregressive,
title={Autoregressive Entity Retrieval},
author={Nicola {De Cao} and Gautier Izacard and Sebastian Riedel and Fabio Petroni},
booktitle={International Conference on Learning Representations},
url={https://openreview.net/forum?id=5k8F6UU39V},
year={2021}
}
```
## Usage
Here is an example of generation for Wikipedia page retrieval for open-domain fact-checking:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# OPTIONAL: load the prefix tree (trie), you need to additionally download
# https://huggingface.co/facebook/genre-kilt/blob/main/trie.py and
# https://huggingface.co/facebook/genre-kilt/blob/main/kilt_titles_trie_dict.pkl
# import pickle
# from trie import Trie
# with open("kilt_titles_trie_dict.pkl", "rb") as f:
# trie = Trie.load_from_dict(pickle.load(f))
tokenizer = AutoTokenizer.from_pretrained("facebook/genre-kilt")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/genre-kilt").eval()
sentences = ["Einstein was a German physicist."]
outputs = model.generate(
**tokenizer(sentences, return_tensors="pt"),
num_beams=5,
num_return_sequences=5,
# OPTIONAL: use constrained beam search
# prefix_allowed_tokens_fn=lambda batch_id, sent: trie.get(sent.tolist()),
)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
which outputs the following top-5 predictions (using constrained beam search)
```
['Albert Einstein',
'Erwin Schrödinger',
'Werner Bruschke',
'Werner von Habsburg',
'Werner von Moltke']
``` |
microsoft/markuplm-large-finetuned-websrc | a9fe69d1cb7a60734e3bc18060edcf0b4b9310ee | 2022-06-14T13:57:35.000Z | [
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"transformers",
"autotrain_compatible"
] | question-answering | false | microsoft | null | microsoft/markuplm-large-finetuned-websrc | 70 | null | transformers | 5,397 | # MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518), by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
|
prodm93/bert-rp-1-sentchunks | 91e55fc4c609a6bc40ba7533e35128683d375131 | 2022-07-04T19:21:23.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | prodm93 | null | prodm93/bert-rp-1-sentchunks | 70 | null | transformers | 5,398 | Entry not found |
cybertelx/DialoGPT-small-drunkic0n | b31eccbc3cec34289e091cc506e53c23bf47fc71 | 2022-07-14T14:45:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | cybertelx | null | cybertelx/DialoGPT-small-drunkic0n | 70 | null | transformers | 5,399 | ---
tags:
- conversational
---
# Drunk IC-0n
IC-0n (or Icon) is a murderous AI protagonist of the Internecion Cube series. This is an attempt to build her in real life (haha it failed, and actually gladly)
This uses Microsoft's DialoGPT-small and is trained on all of Icon's lines throughout the series from episodes 1-3 (only 50 lines though, so low training data).
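To talk to her, a minimal chat loop following the standard DialoGPT usage pattern should work; the generation settings below are illustrative, not tuned.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cybertelx/DialoGPT-small-drunkic0n")
model = AutoModelForCausalLM.from_pretrained("cybertelx/DialoGPT-small-drunkic0n")

chat_history_ids = None
for _ in range(3):  # a short three-turn conversation
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200, do_sample=True, top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("IC-0n:", reply)
```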
It's "drunk" because it is very incoherent. |