modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
hackathon-pln-es/t5-small-finetuned-spanish-to-quechua | 1a1c1a24c23a9b4cff2444718dcbc958875018cb | 2022-04-03T05:42:56.000Z | [
"pytorch",
"t5",
"text2text-generation",
"es",
"qu",
"transformers",
"quechua",
"translation",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | hackathon-pln-es | null | hackathon-pln-es/t5-small-finetuned-spanish-to-quechua | 90 | 4 | transformers | 4,800 | ---
language:
- es
- qu
tags:
- quechua
- translation
- spanish
license: apache-2.0
metrics:
- bleu
- sacrebleu
widget:
- text: "Dios ama a los hombres"
- text: "A pesar de todo, soy feliz"
- text: "¿Qué harán allí?"
- text: "Debes aprender a respetar"
---
# Spanish to Quechua translator
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small).
## Model description
t5-small-finetuned-spanish-to-quechua was trained for 46 epochs on 102,747 sentences; validation was performed on 12,844 sentences, and 12,843 sentences were used for testing.
## Intended uses & limitations
A large part of the dataset has been extracted from biblical texts, which makes the model perform better with certain types of sentences.
### How to use
You can import this model as follows:
```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> model_name = 'hackathon-pln-es/t5-small-finetuned-spanish-to-quechua'
>>> model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```
To translate, you can do:
```python
>>> sentence = "Entonces dijo"
>>> input = tokenizer(sentence, return_tensors="pt")
>>> output = model.generate(input["input_ids"], max_length=40, num_beams=4, early_stopping=True)
>>> print('Original Sentence: {} \nTranslated sentence: {}'.format(sentence, tokenizer.decode(output[0])))
```
### Limitations and bias
Currently, this model can only translate into Ayacucho Quechua.
## Training data
To train this model, we used the [Spanish to Quechua dataset](https://huggingface.co/datasets/hackathon-pln-es/spanish-to-quechua).
## Evaluation results
We obtained the following metrics during the training process:
- eval_bleu = 2.9691
- eval_loss = 1.2064628601074219
## Team members
- [Sara Benel](https://huggingface.co/sbenel)
- [Jose Vílchez](https://huggingface.co/JCarlos)
|
ccdv/lsg-bart-base-16384-pubmed | ee321b6636965d14801b58b61362a34b5704a23c | 2022-07-25T05:29:10.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:scientific_papers",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ccdv | null | ccdv/lsg-bart-base-16384-pubmed | 90 | 3 | transformers | 4,801 | ---
language:
- en
tags:
- summarization
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-16384-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
**This model relies on a custom modeling file; you need to add `trust_remote_code=True`**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-pubmed", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-pubmed", trust_remote_code=True)
text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
text,
truncation=True,
max_length=64,
no_repeat_ngram_size=7,
num_beams=2,
early_stopping=True
)
```
# ccdv/lsg-bart-base-16384-pubmed
This model is a fine-tuned version of [ccdv/lsg-bart-base-4096-pubmed](https://huggingface.co/ccdv/lsg-bart-base-4096-pubmed) on the [scientific_papers pubmed](https://huggingface.co/datasets/scientific_papers) dataset. \
The model was converted to handle sequences of up to 16384 tokens and fine-tuned accordingly for 1 epoch. \
It achieves the following results on the test set:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 16384 | 64 | Full | 256 | 0 | 768 | 48.32 | 22.52 | 29.36 | 44.57 |
| 16384 | 1 | Full | 256 | 0 | 768 | 48.26 | 22.53 | 29.40 | 44.51 |
| 16384 | 64 | Global only | 256 | 0 | 768 | 48.12 | 20.46 | 29.34 | 44.40 |
| 16384 | 1 | None | 256 | 0 | 768 | 48.03 | 22.42 | 29.28 | 44.32 |
Reference model:
| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connexions | R1 | R2 | RL | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- | :--------- |:----- |:----- |:----- |:----- |
| 4096 | 1 | - | 256 | 0 | 768 | 47.37 | 21.74 | 28.59 | 43.67 |
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
The model is warm-started from [ccdv/lsg-bart-base-4096-pubmed](https://huggingface.co/ccdv/lsg-bart-base-4096-pubmed), converted to handle long sequences (encoder only), and fine-tuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Generate hyperparameters
The following hyperparameters were used during generation:
- dataset_name: scientific_papers
- dataset_config_name: pubmed
- eval_batch_size: 4
- eval_samples: 6658
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 512
- min_length: 128
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
sanjay-m1/parrot-adequacy-on-BART | 0514f368694920fc841582af98ed94f720a00df2 | 2022-05-21T17:37:29.000Z | [
"pytorch",
"bart",
"transformers"
] | null | false | sanjay-m1 | null | sanjay-m1/parrot-adequacy-on-BART | 90 | null | transformers | 4,802 | # Parrot
THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER
## 1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate the training of NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the [GitHub page](https://github.com/PrithivirajDamodaran/Parrot) or the model card of prithivida/parrot_paraphraser_on_T5.
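For context, this is roughly how the Parrot framework is invoked end to end (a sketch based on the Parrot GitHub README; the `parrot` package and the default paraphraser checkpoint are assumptions here, not part of this ancillary model):
```python
# pip install git+https://github.com/PrithivirajDamodaran/Parrot_Paraphraser.git
from parrot import Parrot

# Loads the T5-based paraphraser; ancillary models like this one are used internally for filtering/ranking.
parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5", use_gpu=False)

phrases = ["Can you recommend some upscale restaurants in New York?"]
for phrase in phrases:
    # augment() returns candidate paraphrases (or None if nothing passes the filters)
    for para_phrase in parrot.augment(input_phrase=phrase) or []:
        print(para_phrase)
```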
|
eunsour/en-ko-transliterator | 169f2b5772f9347985d0a7ab462685c801b8cbfe | 2022-06-28T14:32:25.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | eunsour | null | eunsour/en-ko-transliterator | 90 | 0 | transformers | 4,803 | ```
!pip install simpletransformers
from simpletransformers.t5 import T5Model
model = T5Model("mt5", "eunsour/en-ko-transliterator", use_cuda=False)
print(model.predict(["transformer"]))
print(model.predict(["attention"]))
``` |
JulesBelveze/t5-small-headline-generator | 58ef632459769780866fa6d16a9cfbc69add8cb0 | 2022-07-01T05:17:57.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:JulesBelveze/tldr_news",
"transformers",
"summarization",
"headline-generation",
"text-generation",
"autotrain_compatible"
] | summarization | false | JulesBelveze | null | JulesBelveze/t5-small-headline-generator | 90 | 2 | transformers | 4,804 | ---
language:
- en
tags:
- summarization
- headline-generation
- text-generation
datasets:
- JulesBelveze/tldr_news
metrics:
- rouge1
- rouge2
- rougeL
- rougeLsum
---
# t5-small for headline generation
This model is a [t5-small](https://huggingface.co/t5-small) fine-tuned for headline generation using
the [JulesBelveze/tldr_news](https://huggingface.co/datasets/JulesBelveze/tldr_news) dataset.
## Using this model
```python
import re
from transformers import AutoTokenizer, T5ForConditionalGeneration
WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip()))
article_text = """US FCC commissioner Brendan Carr has asked Apple and Google to remove TikTok from their app stores. The video app is owned by Chinese company ByteDance. Carr claims that TikTok functions as a surveillance tool that harvests extensive amounts of personal and sensitive data from US citizens. TikTok says its data access approval process is overseen by a US-based security team and that data is only accessed on an as-needed basis under strict controls."""
model_name = "JulesBelveze/t5-small-headline-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=384
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Evaluation
| Metric | Score |
|------------|---------|
| ROUGE 1 | 44.2379 |
| ROUGE 2 | 17.4961 |
| ROUGE L | 41.1119 |
| ROUGE Lsum | 41.1256 | |
Amrrs/indian-foods | f17394f1fb1e91fc497c5124805a0ca7db91dc03 | 2021-07-20T10:20:55.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | Amrrs | null | Amrrs/indian-foods | 89 | 1 | transformers | 4,805 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: indian-foods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9285714030265808
---
# indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
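A minimal sketch of running the classifier locally with the standard image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Amrrs/indian-foods")
# Returns scores over the classes shown below (idli, kachori, pani puri, samosa, vada pav).
print(classifier("path/to/your_photo.jpg"))
```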
## Example Images
#### idli

#### kachori

#### pani puri

#### samosa

#### vada pav
 |
DeepChem/SmilesTokenizer_PubChem_1M | 4922f620c0488ad0362ba89430f8c96041072c29 | 2021-05-31T20:54:05.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | DeepChem | null | DeepChem/SmilesTokenizer_PubChem_1M | 89 | null | transformers | 4,806 | RoBERTa model trained on 1M SMILES from PubChem 77M set in MoleculeNet. Uses Smiles-Tokenizer |
Helsinki-NLP/opus-mt-de-ZH | 93d4bc065a572a35ab1f1110ffeccc9740444a42 | 2021-09-09T21:30:07.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"zh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-ZH | 89 | 1 | transformers | 4,807 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-de-ZH
* source languages: de
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage example below
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-20.eval.txt)
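A minimal usage sketch with the Transformers Marian classes (the `>>zh<<` token is taken from the target-ID list above; `tokenizer.supported_language_codes` lists the tokens the vocabulary actually contains):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ZH"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target language token to the source sentence.
src = [">>zh<< Wie geht es dir heute?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```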
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.de.zh | 24.4 | 0.335 |
|
Helsinki-NLP/opus-mt-es-vi | b3b69971697a56c967a303713e0f4dcd06256311 | 2021-01-18T08:29:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-vi | 89 | null | transformers | 4,808 | ---
language:
- es
- vi
tags:
- translation
license: apache-2.0
---
### spa-vie
* source group: Spanish
* target group: Vietnamese
* OPUS readme: [spa-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-vie/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.vie | 33.1 | 0.508 |
### System Info:
- hf_name: spa-vie
- source_languages: spa
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'vi']
- src_constituents: {'spa'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-vie/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: vie
- short_pair: es-vi
- chrF2_score: 0.508
- bleu: 33.1
- brevity_penalty: 0.98
- ref_len: 4654.0
- src_name: Spanish
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: vi
- prefer_old: False
- long_pair: spa-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
MKaan/multilingual-cpv-sector-classifier | 593b7b5901a38e28cdf27fe3381ae96dc172415e | 2021-11-28T13:09:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"eu",
"public procurement",
"cpv",
"sector",
"multilingual",
"license:apache-2.0"
] | text-classification | false | MKaan | null | MKaan/multilingual-cpv-sector-classifier | 89 | 3 | transformers | 4,809 | ---
license: apache-2.0
tags:
- eu
- public procurement
- cpv
- sector
- multilingual
- transformers
- text-classification
widget:
- text: "Oppegård municipality, hereafter called the contracting authority, intends to enter into a framework agreement with one supplier for the procurement of fresh bread and bakery products for Oppegård municipality. The contract is estimated to NOK 1 400 000 per annum excluding VAT The total for the entire period including options is NOK 5 600 000 excluding VAT"
---
# multilingual-cpv-sector-classifier
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on [the Tenders Economic Daily Public Procurement Data](https://simap.ted.europa.eu/en).
It achieves the following results on the evaluation set:
- F1 Score: 0.686
## Model description
The model takes procurement descriptions written in any of [104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) and classifies them into 45 sector classes represented by [CPV(Common Procurement Vocabulary)](https://simap.ted.europa.eu/en_GB/web/simap/cpv) code descriptions as listed below.
| Common Procurement Vocabulary |
|:-----------------------------|
| Administration, defence and social security services. 👮♀️ |
| Agricultural machinery. 🚜 |
| Agricultural, farming, fishing, forestry and related products. 🌾 |
| Agricultural, forestry, horticultural, aquacultural and apicultural services. 👨🏿🌾 |
| Architectural, construction, engineering and inspection services. 👷♂️ |
| Business services: law, marketing, consulting, recruitment, printing and security. 👩💼 |
| Chemical products. 🧪 |
| Clothing, footwear, luggage articles and accessories. 👖 |
| Collected and purified water. 🌊 |
| Construction structures and materials; auxiliary products to construction (excepts electric apparatus). 🧱 |
| Construction work. 🏗️ |
| Education and training services. 👩🏿🏫 |
| Electrical machinery, apparatus, equipment and consumables; Lighting. ⚡ |
| Financial and insurance services. 👨💼 |
| Food, beverages, tobacco and related products. 🍽️ |
| Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products. 🗄️ |
| Health and social work services. 👨🏽⚕️ |
| Hotel, restaurant and retail trade services. 🏨 |
| IT services: consulting, software development, Internet and support. 🖥️ |
| Industrial machinery. 🏭 |
| Installation services (except software). 🛠️ |
| Laboratory, optical and precision equipments (excl. glasses). 🔬 |
| Leather and textile fabrics, plastic and rubber materials. 🧵 |
| Machinery for mining, quarrying, construction equipment. ⛏️ |
| Medical equipments, pharmaceuticals and personal care products. 💉 |
| Mining, basic metals and related products. ⚙️ |
| Musical instruments, sport goods, games, toys, handicraft, art materials and accessories. 🎸 |
| Office and computing machinery, equipment and supplies except furniture and software packages. 🖨️ |
| Other community, social and personal services. 🧑🏽🤝🧑🏽 |
| Petroleum products, fuel, electricity and other sources of energy. 🔋 |
| Postal and telecommunications services. 📶 |
| Printed matter and related products. 📰 |
| Public utilities. ⛲ |
| Radio, television, communication, telecommunication and related equipment. 📡 |
| Real estate services. 🏠 |
| Recreational, cultural and sporting services. 🚴 |
| Repair and maintenance services. 🔧 |
| Research and development services and related consultancy services. 👩🔬 |
| Security, fire-fighting, police and defence equipment. 🧯 |
| Services related to the oil and gas industry. ⛽ |
| Sewage-, refuse-, cleaning-, and environmental services. 🧹 |
| Software package and information systems. 🔣 |
| Supporting and auxiliary transport services; travel agencies services. 🚃 |
| Transport equipment and auxiliary products to transportation. 🚌 |
| Transport services (excl. Waste transport). 💺 |
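As a minimal usage sketch, the checkpoint can be queried through the standard text-classification pipeline (the example description is illustrative; the returned label is one of the 45 CPV sector classes above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MKaan/multilingual-cpv-sector-classifier")
description = (
    "The contracting authority intends to enter into a framework agreement for the "
    "procurement of fresh bread and bakery products for the municipality."
)
print(classifier(description))
```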
## Intended uses & limitations
- Input description should be written in any of [the 104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) that MBERT supports.
- The model has only been evaluated on 22 languages, so there is no information about its performance in the other languages.
- The domain is also restricted to awarded procurement notice descriptions in the European Union. Evaluating on whole document texts might change the performance.
## Training and evaluation data
- The whole dataset consists of 744,360 rows, shuffled and split into train and validation sets in an 80%/20% manner.
- Each description represents a unique contract notice description awarded between 2011 and 2018.
- Both training and validation data contain contract notice descriptions written in 22 European languages. (Maltese and Irish were excluded due to their scarcity compared to the whole dataset.)
## Training procedure
The training procedure was completed on Google Cloud v3-8 TPUs. Thanks to [Google](https://sites.research.google/trc/about/) for providing access to Cloud TPUs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- num_epochs: 3
- gradient_accumulation_steps: 8
- batch_size_per_device: 4
- total_train_batch_size: 32
### Training results
| Epoch | Step | F1 Score|
|:-----:|:------:|:------:|
| 1 | 18,609 | 0.630 |
| 2 | 37,218 | 0.674 |
| 3 | 55,827 | 0.686 |
| Language| F1 Score| Test Size|
|:-----:|:-----:|:-----:|
| PL| 0.759| 13950|
| RO| 0.736| 3522|
| SK| 0.719| 1122|
| LT| 0.687| 2424|
| HU| 0.681| 1879|
| BG| 0.675| 2459|
| CS| 0.668| 2694|
| LV| 0.664| 836|
| DE| 0.645| 35354|
| FI| 0.644| 1898|
| ES| 0.643| 7483|
| PT| 0.631| 874|
| EN| 0.631| 16615|
| HR| 0.626| 865|
| IT| 0.626| 8035|
| NL| 0.624| 5640|
| EL| 0.623| 1724|
| SL| 0.615| 482|
| SV| 0.607| 3326|
| DA| 0.603| 1925|
| FR| 0.601| 33113|
| ET| 0.572| 458| |
NDugar/v3-Large-mnli | 291554748314581adf4398af85d52b5f14f7f70e | 2021-12-26T15:27:41.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"transformers",
"deberta-v1",
"deberta-mnli",
"license:mit",
"zero-shot-classification"
] | zero-shot-classification | false | NDugar | null | NDugar/v3-Large-mnli | 89 | 1 | transformers | 4,810 | ---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4103
- Accuracy: 0.9175
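Since the checkpoint is exposed under the zero-shot-classification pipeline tag, a minimal sketch of NLI-based zero-shot classification looks like this (text and labels are illustrative; this assumes the usual MNLI entailment/contradiction label mapping in the model config):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="NDugar/v3-Large-mnli")
result = classifier(
    "The new GPU cut our training time in half.",
    candidate_labels=["hardware", "cooking", "politics"],
)
print(result["labels"][0], result["scores"][0])
```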
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3631 | 1.0 | 49088 | 0.3129 | 0.9130 |
| 0.2267 | 2.0 | 98176 | 0.4157 | 0.9153 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
PaulLerner/dpr_context_encoder_triviaqa_without_viquae | d1c83af9b648a6dbe234dfb51c4713ecceed2d9f | 2022-02-18T13:58:18.000Z | [
"pytorch",
"dpr",
"transformers"
] | null | false | PaulLerner | null | PaulLerner/dpr_context_encoder_triviaqa_without_viquae | 89 | null | transformers | 4,811 | Entry not found |
cosmoquester/bart-ko-small | 32c6ec5844197a774d6460e7fc83812af9dc24e3 | 2021-08-28T05:09:54.000Z | [
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | cosmoquester | null | cosmoquester/bart-ko-small | 89 | null | transformers | 4,812 | ---
language: ko
---
# Pretrained BART in Korean
This is a BART model pretrained on multiple Korean datasets.
I used multiple datasets to generalize the model to both colloquial and written texts.
The training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.
The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).
When you use the Inference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below.
```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```
You can also test mask-filling performance using the `[MASK]` token like this.
```
[BOS] [MASK] 먹었어? [EOS]
```
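Outside the hosted Inference API, a rough local equivalent might look like this (a sketch; the generation settings are illustrative, not the authors' settings):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "cosmoquester/bart-ko-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Wrap the input with [BOS]/[EOS] as described above.
text = "[BOS] [MASK] 먹었어? [EOS]"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```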
## Benchmark
<style>
table {
border-collapse: collapse;
border-style: hidden;
width: 100%;
}
td, th {
border: 1px solid #4d5562;
padding: 8px;
}
</style>
<table>
<tr>
<th>Dataset</th>
<td>KLUE NLI dev</td>
<td>NSMC test</td>
<td>QuestionPair test</td>
<td colspan="2">KLUE TC dev</td>
<td colspan="3">KLUE STS dev</td>
<td colspan="3">KorSTS dev</td>
<td colspan="2">HateSpeech dev</td>
</tr>
<tr>
<th>Metric</th>
<!-- KLUE NLI -->
<td>Acc</td>
<!-- NSMC -->
<td>Acc</td>
<!-- QuestionPair -->
<td>Acc</td>
<!-- KLUE TC -->
<td>Acc</td>
<td>F1</td>
<!-- KLUE STS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- KorSTS -->
<td>F1</td>
<td>Pearson</td>
<td>Spearman</td>
<!-- HateSpeech -->
<td>Bias Acc</td>
<td>Hate Acc</td>
</tr>
<tr>
<th>Score</th>
<!-- KLUE NLI -->
<td>0.639</td>
<!-- NSMC -->
<td>0.8721</td>
<!-- QuestionPair -->
<td>0.905</td>
<!-- KLUE TC -->
<td>0.8551</td>
<td>0.8515</td>
<!-- KLUE STS -->
<td>0.7406</td>
<td>0.7593</td>
<td>0.7551</td>
<!-- KorSTS -->
<td>0.7897</td>
<td>0.7269</td>
<td>0.7037</td>
<!-- HateSpeech -->
<td>0.8068</td>
<td>0.5966</td>
</tr>
</table>
- The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) with colab.
## Used Datasets
### [모두의 말뭉치](https://corpus.korean.go.kr/)
- 일상 대화 말뭉치 2020
- 구어 말뭉치
- 문어 말뭉치
- 신문 말뭉치
### AIhub
- [개방데이터 전문분야말뭉치](https://aihub.or.kr/aidata/30717)
- [개방데이터 한국어대화요약](https://aihub.or.kr/aidata/30714)
- [개방데이터 감성 대화 말뭉치](https://aihub.or.kr/aidata/7978)
- [개방데이터 한국어 음성](https://aihub.or.kr/aidata/105)
- [개방데이터 한국어 SNS](https://aihub.or.kr/aidata/30718)
### [세종 말뭉치](https://ithub.korean.go.kr/)
|
mrm8488/deberta-v3-small-finetuned-sst2 | 94d075b4095d80dde2c23ad513ebcac7bfe7f0ad | 2021-11-21T19:17:56.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"deberta-v3",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/deberta-v3-small-finetuned-sst2 | 89 | null | transformers | 4,813 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
- deberta-v3
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-small
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9403669724770642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa v3 (small) fine-tuned on SST2
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.9404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.176 | 1.0 | 4210 | 0.2134 | 0.9404 |
| 0.1254 | 2.0 | 8420 | 0.2362 | 0.9415 |
| 0.0957 | 3.0 | 12630 | 0.3187 | 0.9335 |
| 0.0673 | 4.0 | 16840 | 0.3039 | 0.9266 |
| 0.0457 | 5.0 | 21050 | 0.3521 | 0.9312 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
salesken/paraphrase_generation | 1141e8ba54581a8130c4adcc407ae85525d1cf4f | 2021-05-23T12:33:04.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"salesken",
"license:apache-2.0"
] | text-generation | false | salesken | null | salesken/paraphrase_generation | 89 | 1 | transformers | 4,814 | ---
language: en
thumbnail: https://salesken.ai/assets/images/logo.png
license: apache-2.0
inference: false
widget:
- text: "every moment is a fresh beginning"
tags: salesken
---
Use this model to generate variations to augment the training data used for NLU systems.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch
if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = "cpu"
tokenizer = AutoTokenizer.from_pretrained("salesken/paraphrase_generation")
model = AutoModelWithLMHead.from_pretrained("salesken/paraphrase_generation").to(device)
input_query="every moment is a fresh beginning"
query= input_query + " ~~ "
input_ids = tokenizer.encode(query.lower(), return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
num_beams=1,
max_length=128,
temperature=0.9,
top_p= 0.99,
top_k = 30,
num_return_sequences=40)
paraphrases = []
for i in range(len(sample_outputs)):
    r = tokenizer.decode(sample_outputs[i], skip_special_tokens=True).split('||')[0]
    r = r.split(' ~~ ')[1]
    if r not in paraphrases:
        paraphrases.append(r)
print(paraphrases)
```
To evaluate whether a paraphrase is a semantic variation of the input query or just a surface-level variation, and to rank the generated paraphrases, use the following model:
https://huggingface.co/salesken/paraphrase_diversity_ranker
|
timm/vit_huge_patch14_224_in21k | bbf572e74b9cf2a10daa9461505cb506c9189c4a | 2021-03-18T10:58:13.000Z | [
"pytorch",
"dataset:imagenet_21k",
"timm",
"image-classification",
"vision-transformer",
"license:apache-2.0"
] | image-classification | false | timm | null | timm/vit_huge_patch14_224_in21k | 89 | null | timm | 4,815 | ---
tags:
- image-classification
- timm
- vision-transformer
license: apache-2.0
datasets:
- imagenet_21k
inference: false
---
# ViT-H/14 (ImageNet-21k)
...
|
yongzx/gpt2-finetuned-oscar-fr-ori-tok | b3f3f7e2dc1729199e44424c6b7edc1aaafa18f6 | 2021-12-09T06:37:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"fr",
"dataset:oscar",
"transformers",
"license:mit"
] | text-generation | false | yongzx | null | yongzx/gpt2-finetuned-oscar-fr-ori-tok | 89 | null | transformers | 4,816 | ---
language:
- fr
tags:
- text-generation
license: mit
datasets:
- oscar
widget:
- text: "Je suis ravi de vous "
---
# GPT-2 finetuned on French Dataset
### Tokenizer
We use GPT-2 tokenizer.
### Model
We fine-tuned the `wte` and `wpe` layers of GPT-2 (while freezing the parameters of all other layers) on OSCAR's `unshuffled_original_fr` French data subset. We used [Hugging Face's code](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) for fine-tuning the causal language model GPT-2, but with the following parameters changed:
```
- preprocessing_num_workers: 8
- per_device_train_batch_size: 2
- gradient_accumulation_steps: 4
- per_device_eval_batch_size: 2
- eval_accumulation_steps: 4
- eval_steps: 1000
- evaluation_strategy: "steps"
- max_eval_samples: 5000
```
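A minimal generation sketch with the resulting checkpoint (sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="yongzx/gpt2-finetuned-oscar-fr-ori-tok")
print(generator("Je suis ravi de vous ", max_length=30, do_sample=True, top_k=50)[0]["generated_text"])
```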
**Final checkpoint**: checkpoint-76500 |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_editorials_05_03_2022-06_21_38 | 328afb20b84511cb9c350d495c6a3a90922aefc1 | 2022-03-05T05:24:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_editorials_05_03_2022-06_21_38 | 89 | null | transformers | 4,817 | Entry not found |
jkhan447/sentiment-model-sample-27go-emotion | 2e63119a9d2b37e5e054e903f62cded65ee34a70 | 2022-04-01T08:13:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:go_emotions",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | jkhan447 | null | jkhan447/sentiment-model-sample-27go-emotion | 89 | null | transformers | 4,818 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
model-index:
- name: sentiment-model-sample-27go-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model-sample-27go-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1765
- Accuracy: 0.5889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
ismail-lucifer011/autotrain-company_all-903429548 | acbdc6779e48dfde49f2ba77d6f2cfb431333d16 | 2022-05-24T14:24:20.000Z | [
"pytorch",
"distilbert",
"token-classification",
"en",
"dataset:ismail-lucifer011/autotrain-data-company_all",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
] | token-classification | false | ismail-lucifer011 | null | ismail-lucifer011/autotrain-company_all-903429548 | 89 | null | transformers | 4,819 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ismail-lucifer011/autotrain-data-company_all
co2_eq_emissions: 0.848790823793881
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 903429548
- CO2 Emissions (in grams): 0.848790823793881
## Validation Metrics
- Loss: 0.006148040760308504
- Accuracy: 0.9979930566588805
- Precision: 0.9814944904963571
- Recall: 0.9817210885036588
- F1: 0.9816077764228254
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ismail-lucifer011/autotrain-company_all-903429548
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ismail-lucifer011/autotrain-company_all-903429548", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ismail-lucifer011/autotrain-company_all-903429548", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
Genario/multilingual_paraphrase | 40fa64fac60a68006923e19daefa6545b780888c | 2022-06-27T13:01:00.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"multilingual",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | Genario | null | Genario/multilingual_paraphrase | 89 | null | sentence-transformers | 4,820 | ---
pipeline_tag: feature-extraction
language: multilingual
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
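A minimal usage sketch with the sentence-transformers library (sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Genario/multilingual_paraphrase")
sentences = ["This is an example sentence.", "Ceci est une phrase d'exemple."]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384)
```
|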
nielsr/convnext-tiny-maskrcnn | 4e675c60d14587de898a2803cc34dc733f9a4899 | 2022-06-28T10:05:05.000Z | [
"pytorch",
"convnext_maskrcnn",
"transformers"
] | null | false | nielsr | null | nielsr/convnext-tiny-maskrcnn | 89 | null | transformers | 4,821 | Entry not found |
throwaway112358112358/DialoGPT-medium-script | 890ae93de34b8c42ca3e5b21a3ef09ca410f4fd0 | 2022-07-22T06:23:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | throwaway112358112358 | null | throwaway112358112358/DialoGPT-medium-script | 89 | null | transformers | 4,822 | ---
tags:
- conversational
---
# America DialoGPT Model |
Helsinki-NLP/opus-mt-af-es | 2b07ca080f32eb33642e2da486f8e5846f1bd7b7 | 2021-01-18T07:46:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"af",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-af-es | 88 | null | transformers | 4,823 | ---
language:
- af
- es
tags:
- translation
license: apache-2.0
---
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-pl | 623776f6e081085cf6c215c9fd91326fbf80e090 | 2021-01-18T07:47:35.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"pl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-pl | 88 | null | transformers | 4,824 | ---
language:
- ar
- pl
tags:
- translation
license: apache-2.0
---
### ara-pol
* source group: Arabic
* target group: Polish
* OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md)
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.pol | 38.0 | 0.623 |
### System Info:
- hf_name: ara-pol
- source_languages: ara
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'pl']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: pol
- short_pair: ar-pl
- chrF2_score: 0.623
- bleu: 38.0
- brevity_penalty: 0.948
- ref_len: 1171.0
- src_name: Arabic
- tgt_name: Polish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ara-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Ivo/emscad-skill-extraction-token-classification | e8aea8bda132c66d6408535e2d24f75e021a3006 | 2021-06-15T09:37:47.000Z | [
"pytorch",
"tf",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Ivo | null | Ivo/emscad-skill-extraction-token-classification | 88 | null | transformers | 4,825 | Entry not found |
dbernsohn/roberta-java | 17434de4294ec97090e7a2286d5b395013b1bfe1 | 2021-05-20T15:54:29.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"Java",
"dataset:code_search_net",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbernsohn | null | dbernsohn/roberta-java | 88 | 1 | transformers | 4,826 | # roberta-java
---
language: Java
datasets:
- code_search_net
---
This is a [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Java** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-java")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in Java code.
```python
code = """
String[] cars = {"Volvo", "BMW", "Ford", "Mazda"};
for (String i : cars) {
System.out.<mask>(i);
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('println', 0.32571351528167725),
# ('get', 0.2897663116455078),
# ('remove', 0.0637081190943718),
# ('exit', 0.058875661343336105),
# ('print', 0.034190207719802856)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/) |
flax-community/pino-bigbird-roberta-base | 81688aa15c56f034bdb694e75f526c884231de6d | 2022-01-13T15:29:26.000Z | [
"pytorch",
"jax",
"tensorboard",
"big_bird",
"fill-mask",
"nl",
"dataset:mC4",
"dataset:Dutch_news",
"arxiv:2007.14062",
"transformers",
"autotrain_compatible"
] | fill-mask | false | flax-community | null | flax-community/pino-bigbird-roberta-base | 88 | 1 | transformers | 4,827 | ---
language: nl
datasets:
- mC4
- Dutch_news
---
# Pino (Dutch BigBird) base model
Created by [Dat Nguyen](https://www.linkedin.com/in/dat-nguyen-49a641138/) & [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) during the [Hugging Face community week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104)
(Not finished yet)
BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle.
It is a model pretrained on the Dutch language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird).
## Model description
BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA results on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts.
## How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BigBirdModel
# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base")
# you can change `attention_type` to full attention like this:
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", attention_type="original_full")
# you can change `block_size` & `num_random_blocks` like this:
model = BigBirdModel.from_pretrained("flax-community/pino-bigbird-roberta-base", block_size=16, num_random_blocks=2)
```
## Training Data
This model is pre-trained on publicly available data: **mC4**, and scraped **Dutch news** from NRC and Nu.nl. It uses the fast universal byte-level BPE (BBPE) tokenizer and vocabulary of RoBERTa (in turn borrowed from GPT-2), in contrast to a SentencePiece tokenizer.
## Training Procedure
The data is cleaned as follows:
- Remove texts containing HTML code / JavaScript code / lorem ipsum / policies
- Remove lines without an end mark
- Remove too-short texts and words
- Remove too-long texts and words
- Remove bad words
## BibTeX entry and citation info
```tex
@misc{zaheer2021big,
title={Big Bird: Transformers for Longer Sequences},
author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed},
year={2021},
eprint={2007.14062},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
gatecitypreservation/architectural_styles | ae57ad5ef7d339c28bee56b608e4e8830284c8f3 | 2022-01-07T18:41:50.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | gatecitypreservation | null | gatecitypreservation/architectural_styles | 88 | 1 | transformers | 4,828 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: architectural_styles
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7796609997749329
---
### What style is that?
This model can help identify five architectural styles that were prominent in the early to mid 20th century. Check back for updates including more architectural styles and more accurate predictions as this model diversifies and improves its training.
Upload a photograph of a building to the File Uploader on the right. The Image Classifier will predict its architectural style using a database of over 700 images. Scroll down to read more about each style.
### Classical Revival (1895 - 1950)
The Classical Revival or Neoclassical style is one of the most commonly seen across the state and the country. This style was inspired by the World's Columbian Exposition in Chicago held in 1893 which promoted a renewed interest in the classical forms. This style encompasses many different styles, including Colonial Revival, Greek Revival, Neoclassical Revival and Mediterranean Revival. Colonial Revival is most commonly used in residential dwellings, while Greek and Neoclassical Revival styles are commonly used in commercial buildings like banks, post offices, and municipal buildings.

#### Queen Anne (1880-1910)
The Queen Anne style was one of a number of popular architectural styles that emerged in the United States during the Victorian Period. It ranges from high style, like the image pictured here, to more vernacular styles that exhibit the Queen Anne form without its high style architectural details.

#### Craftsman Bungalow (1900-1930)
The terms “craftsman” and “bungalow” are often used interchangeably; however, “craftsman” refers to the Arts and Crafts movement and is considered an architectural style, whereas “bungalow” is the form of house. Bungalows often exhibit a craftsman style.

#### Tudor Cottage (1910-1950)
Tudor homes are inspired by the Medieval period and can range is size and style. In general, the Tudor style features steeply pitched roofs, often with a cat-slide roof line, predominately brick construction, sometimes accented with half-timber framing, front-facing, prominently placed brick or stone chimneys, and tall windows with rectangular or diamond-shaped panes. Front doors are typically off-center with a round arch at the top of the door or doorway.

#### Mid-Century Modern Ranch (1930-1970)
The Ranch style originated in southern California in the mid-1930s. In the 1940s, the Ranch was one of the small house types financed by the Federal Housing Administration (FHA), along with Minimal Traditional and other small house styles. The Ranch house began to pick up popularity as the financial controls that encouraged small house building lifted following WWII; by the 1950s it was the most predominant residential style in the country.

This model was created with HuggingPics🤗🖼️ Image Classifier!
Make your own!: [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). |
google/bert_uncased_L-2_H-512_A-8 | 880c57fdc84683dd9ce13a2fdbdd454abc488fb6 | 2021-05-19T17:29:08.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-2_H-512_A-8 | 88 | null | transformers | 4,829 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
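For reference, loading one of these checkpoints for downstream fine-tuning follows the standard BERT recipe (a sketch; the sequence-classification head and `num_labels` are placeholders for whatever task you fine-tune on):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "google/bert_uncased_L-2_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Miniature BERTs fine-tune quickly.", return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2]) before any fine-tuning
```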
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
google/tapas-small-finetuned-tabfact | 2d1b432ff29227fabbfab71456ddec10581a163f | 2021-11-29T13:07:47.000Z | [
"pytorch",
"tf",
"tapas",
"text-classification",
"en",
"dataset:tab_fact",
"arxiv:2010.00571",
"arxiv:2004.02349",
"transformers",
"sequence-classification",
"license:apache-2.0"
] | text-classification | false | google | null | google/tapas-small-finetuned-tabfact | 88 | null | transformers | 4,830 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---
# TAPAS small model fine-tuned on Tabular Fact Checking (TabFact)
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_small`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then
jointly train this randomly initialized classification head with the base model on TabFact.
## Intended uses & limitations
You can use this model for classifying whether a sentence is supported or refuted by the contents of a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
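Since the card itself defers to the external documentation, here is a minimal sketch of direct usage with the `transformers` TAPAS classes; the toy table and the reliance on `config.id2label` for label names are illustrative choices, not taken from the original card.
```python
from transformers import TapasTokenizer, TapasForSequenceClassification
import pandas as pd
import torch
model_name = "google/tapas-small-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)
# TAPAS expects every table cell as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population (millions)": ["2.1", "3.6"]})
sentence = "Berlin has a larger population than Paris."
inputs = tokenizer(table=table, queries=[sentence], padding="max_length", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_class])  # label names come from the checkpoint config
```
The tokenizer flattens the table for you, matching the `[CLS] Sentence [SEP] Flattened table [SEP]` format described below.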
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup
ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Addis Ababa, Ethiopia},
month = {April},
year = {2020}
}
``` |
ibombonato/swin-age-classifier | 5ebf0a390d42254891441ddc0ba9a72564aaa1eb | 2022-02-11T21:42:47.000Z | [
"pytorch",
"tensorboard",
"swin",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | ibombonato | null | ibombonato/swin-age-classifier | 88 | null | transformers | 4,831 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: swin-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8174999952316284
---
# swin-age-classifier
Trained for 80 epochs on data from the AIcrowd AI Blitz XIII "Age Prediction" challenge:
https://www.aicrowd.com/challenges/ai-blitz-xiii/problems/age-prediction/
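A minimal inference sketch (not part of the autogenerated card); the image path is a placeholder for any local face photo you want to classify.
```python
from transformers import pipeline
classifier = pipeline("image-classification", model="ibombonato/swin-age-classifier")
predictions = classifier("face.jpg")  # placeholder path to a local image
print(predictions)
```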
Notebook based on HuggingPics
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). |
mrm8488/gpt2-finetuned-recipes-cooking | b4f72aae501e00b4a1ba022ef93cb5269ed9eaf7 | 2021-05-23T10:24:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/gpt2-finetuned-recipes-cooking | 88 | null | transformers | 4,832 | ---
language: en
thumbnail:
widget:
- text: "HuggingFace Cake:"
---
|
navteca/roberta-large-squad2 | 2434d399f7a25d47d72c7080d0e3905e6cd4bceb | 2021-04-06T16:31:09.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | navteca | null | navteca/roberta-large-squad2 | 88 | null | transformers | 4,833 | ---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- roberta
- question-answering
---
# Roberta large model for QA (SQuAD 2.0)
This model uses [roberta-large](https://huggingface.co/roberta-large).
## Training Data
The models have been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
It can be used for question answering task.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
roberta_model = AutoModelForQuestionAnswering.from_pretrained('navteca/roberta-large-squad2')
roberta_tokenizer = AutoTokenizer.from_pretrained('navteca/roberta-large-squad2')
# Get predictions
nlp = pipeline('question-answering', model=roberta_model, tokenizer=roberta_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
#{
# "answer": "3,520,031"
# "end": 36,
# "score": 0.96186668,
# "start": 27,
#}
```
|
prajjwal1/roberta-base-mnli | de6e05e0d9382d5344202c667368602666c8e1b2 | 2021-05-20T19:31:02.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | prajjwal1 | null | prajjwal1/roberta-base-mnli | 88 | null | transformers | 4,834 | Roberta-base trained on MNLI.
| Task | Accuracy |
|---------|----------|
| MNLI | 86.32 |
| MNLI-mm | 86.43 |
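Not part of the original card: a minimal sketch of running the checkpoint on a premise/hypothesis pair. The sentences are invented, and the label names are read from the checkpoint's `id2label` map rather than assumed here.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "prajjwal1/roberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# The label order is defined by the checkpoint's config, not hard-coded here.
print(model.config.id2label.get(pred, pred))
```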
You can also check out:
- `prajjwal1/roberta-base-mnli`
- `prajjwal1/roberta-large-mnli`
- `prajjwal1/albert-base-v2-mnli`
- `prajjwal1/albert-base-v1-mnli`
- `prajjwal1/albert-large-v2-mnli`
[@prajjwal_1](https://twitter.com/prajjwal_1)
|
sagorsarker/codeswitch-hineng-pos-lince | 2482a85d58bd6ac78a8d90f82bbcd600584cdd92 | 2021-05-19T01:06:07.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"hi",
"en",
"dataset:lince",
"transformers",
"codeswitching",
"hindi-english",
"pos",
"license:mit",
"autotrain_compatible"
] | token-classification | false | sagorsarker | null | sagorsarker/codeswitch-hineng-pos-lince | 88 | null | transformers | 4,835 | ---
language:
- hi
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- hindi-english
- pos
---
# codeswitch-hineng-pos-lince
This is a pretrained model for **Part of Speech Tagging** of `hindi-english` code-mixed data used from [LinCE](https://ritual.uh.edu/lince/home)
This model is trained for this below repository.
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Part-of-Speech Tagging of Hindi-English Mixed Data
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-pos-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-pos-lince")
pos_model = pipeline('ner', model=model, tokenizer=tokenizer)
pos_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import POS
pos = POS('hin-eng')
text = "" # your mixed sentence
result = pos.tag(text)
print(result)
```
|
tanmaylaud/wav2vec2-large-xlsr-hindi-marathi | 269a6b04018f4d7a6740403b27ed58386bd77788 | 2021-04-19T18:40:07.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"hi",
"dataset:openslr",
"dataset:interspeech_2021_asr",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hindi",
"marathi",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tanmaylaud | null | tanmaylaud/wav2vec2-large-xlsr-hindi-marathi | 88 | null | transformers | 4,836 | ---
language: [mr,hi]
datasets:
- openslr
- interspeech_2021_asr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hindi
- marathi
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Hindi-Marathi by Tanmay Laud
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR hi, OpenSLR mr
type: openslr, interspeech_2021_asr
metrics:
- name: Test WER
type: wer
value: 23.736641
---
# Wav2Vec2-Large-XLSR-53-Hindi-Marathi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Hindi and Marathi using the OpenSLR SLR64 datasets. When using this model, make sure that your speech input is sampled at 16kHz.
## Installation
```bash
pip install git+https://github.com/huggingface/transformers.git datasets librosa torch==1.7.0 torchaudio==0.7.0 jiwer
```
## Eval dataset:
```bash
wget https://www.openslr.org/resources/103/Marathi_test.zip -P data/marathi
unzip -P "K3[2?do9" data/marathi/Marathi_test.zip -d data/marathi/.
tar -xzf data/marathi/Marathi_test.tar.gz -C data/marathi/.
wget https://www.openslr.org/resources/103/Hindi_test.zip -P data/hindi
unzip -P "w9I2{3B*" data/hindi/Hindi_test.zip -d data/hindi/.
tar -xzf data/hindi/Hindi_test.tar.gz -C data/hindi/.
wget -O test.csv 'https://filebin.net/snrz6bt13usv8w2e/test_large.csv?t=ps3n99ho'
#If download does not work, paste this link in browser: https://filebin.net/snrz6bt13usv8w2e/test_large.csv
```
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Marathi text and path fields:
```python
import re
import torch
import torchaudio
import librosa
import numpy as np
from datasets import load_dataset, load_metric, Dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\n।]'  # punctuation stripped from transcriptions
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
# `test_data` is assumed to be a Dataset with `path` and `sentence` columns
# (see the evaluation section below for one way to build it from test.csv).
test_data = test_data.map(speech_file_to_array_fn)
inputs = processor(test_data["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_data["target_text"][:2])
```
# Code For Evaluation on OpenSLR (Hindi + Marathi : https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
from datasets import Dataset
# `wer`, `processor` and `model` are reused from the snippet above.
test = Dataset.from_csv('test.csv')
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\n।]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
test= test.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
# we do not want to group tokens when computing the metrics
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
test = test.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test["pred_strings"], references=test["sentence"])))
```
#### Code for Evaluation on Common Voice Hindi (Common voice does not have Marathi yet)
```python
import torchaudio
import torch
import librosa
import numpy as np
import re
from datasets import load_metric, load_dataset, Dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi')
model = Wav2Vec2ForCTC.from_pretrained('tanmaylaud/wav2vec2-large-xlsr-hindi-marathi').to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\n।]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
batch["target_text"] = batch["sentence"]
batch["speech"] = librosa.resample(np.asarray(batch["speech"]), sampling_rate, 16_000)
batch["sampling_rate"] = 16_000
return batch
#Run prediction on batch
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
# we do not want to group tokens when computing the metrics
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
test_data = load_dataset("common_voice", "hi", split="test")
test_data = test_data.map(speech_file_to_array_fn)
test_data = test_data.map(evaluate, batched=True, batch_size=32)
print("WER: {:2f}".format(100 * wer.compute(predictions=test_data["pred_strings"],
references=test_data["sentence"])))
```
Link to eval notebook : https://colab.research.google.com/drive/1nZRTgKfxCD9cvy90wikTHkg2il3zgcqW#scrollTo=cXWFbhb0d7DT
WER : 23.736641% (OpenSLR Hindi+Marathi Test set : https://filebin.net/snrz6bt13usv8w2e/test_large.csv)
WER: 44.083527% (Common Voice Hindi Test Split) |
facebook/maskformer-swin-large-coco | 194762ea9a98ec6d18abed4023aa346dcf1722c3 | 2022-04-04T16:02:13.000Z | [
"pytorch",
"maskformer",
"dataset:coco",
"arxiv:2107.06278",
"transformers",
"vision",
"image-segmentatiom",
"license:apache-2.0"
] | null | false | facebook | null | facebook/maskformer-swin-large-coco | 88 | null | transformers | 4,837 | ---
license: apache-2.0
tags:
- vision
- image-segmentatiom
datasets:
- coco
---
# MaskFormer
MaskFormer model trained on COCO. It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of per-pixel classification.

## Intended uses & limitations
You can use the raw model for image segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-large-coco")
>>> inputs = feature_extractor(images=image, return_tensors="pt")
>>> model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco")
>>> outputs = model(**inputs)
>>> # model predicts class_queries_logits of shape `(batch_size, num_queries)`
>>> # and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
>>> class_queries_logits = outputs.class_queries_logits
>>> masks_queries_logits = outputs.masks_queries_logits
>>> # you can pass them to feature_extractor for postprocessing
>>> output = feature_extractor.post_process_segmentation(outputs)
>>> output = feature_extractor.post_process_semantic_segmentation(outputs)
>>> output = feature_extractor.post_process_panoptic_segmentation(outputs)
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer). |
cometrain/stocks-news-t5 | c201c487d5821a136c7f4422b99d04d2abf68344 | 2022-04-14T10:08:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:financial-sentiment-analysis",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"autotrain_compatible"
] | text2text-generation | false | cometrain | null | cometrain/stocks-news-t5 | 88 | null | transformers | 4,838 | ---
language:
- en
tags:
- Cometrain AutoCode
- Cometrain AlphaML
datasets:
- financial-sentiment-analysis
widget:
- text: "April 14 (Reuters) - Rio Tinto (RIO.AX), one of the largest Australian mining companies, on Thursday confirmed its exit from the state mining lobby group after raising concerns that its policy on expansion of coal mines did not align with the Paris Climate Agreement."
example_title: "Rio Tinto Decision (Neutral)"
- text: "LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with global payments company Mastercard (MA.N) to launch on Wednesday what it calls the world's first crypto-backed payment card."
example_title: "New Mastercard & Nexo project (Positive)"
- text: "April 14 (Reuters) - The Russian rouble weakened on Thursday, driven by expectations that Russia may relax its temporary capital control measures further, while stocks fell as the country continued what it calls 'a special military operation' in Ukraine."
example_title: "Crisis in Russia (Negative)"
inference:
parameters:
top_p: 0.9
temperature: 0.5
---
# stocks-news-t5
This model has been automatically fine-tuned and tested as part of the development of the GPT-2-based AutoML framework for accelerated and easy development of NLP enterprise solutions. Fine-tuned [T5](https://huggingface.co/t5-base) allows to analyze financial market news.
Automatically trained on [Financial Sentiment Analysis(2022)](https://www.kaggle.com/datasets/sbhatti/financial-sentiment-analysis) dataset.
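A minimal inference sketch, assuming the standard `text2text-generation` pipeline; the sampling parameters mirror the widget settings declared above, and the headline is adapted from the widget examples.
```python
from transformers import pipeline
analyzer = pipeline("text2text-generation", model="cometrain/stocks-news-t5")
headline = (
    "LONDON, April 13 (Reuters) - Crypto lender Nexo said it has teamed up with "
    "global payments company Mastercard to launch what it calls the world's first "
    "crypto-backed payment card."
)
# do_sample=True is needed for top_p/temperature to take effect.
print(analyzer(headline, do_sample=True, top_p=0.9, temperature=0.5))
```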
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with CI/CD pipeline made by Cometrain AutoCode
## Cometrain AlphaML command
```shell
$ cometrain create --name stocks-news --model auto --task 'Machine learning model for finance news analysis' --output transformers
``` |
MiBo/SegBert | 67517fe0e778a6f0a0f332725d8d784bae4ee930 | 2022-05-23T12:12:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | MiBo | null | MiBo/SegBert | 88 | null | transformers | 4,839 | Entry not found |
PrimeQA/tydiqa-boolean-answer-classifier | 548277c8c7a0c04468b1f99841364b731393cfe2 | 2022-06-28T19:52:14.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:2112.07772",
"arxiv:2206.08441",
"transformers",
"license:apache-2.0"
] | text-classification | false | PrimeQA | null | PrimeQA/tydiqa-boolean-answer-classifier | 88 | null | transformers | 4,840 | ---
license: apache-2.0
---
## Model description
An answer classification model for boolean questions based on XLM-RoBERTa.
The answer classifier takes as input a boolean question and a passage, and returns a label (yes, no-answer, no).
The model was initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tuned on the boolean questions from [TyDiQA](https://huggingface.co/datasets/tydiqa), as well as [BoolQ-X](https://arxiv.org/abs/2112.07772#).
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, xlm-roberta-large, may be present in our fine-tuned model, tydiqa-boolean-answer-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework for supporting boolean questions in reading comprehension: [examples](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
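Outside of PrimeQA, the checkpoint can also be loaded as a plain sequence classifier; the sketch below is an approximation (the question/passage pair is invented and the exact input formatting used inside PrimeQA may differ).
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "PrimeQA/tydiqa-boolean-answer-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
question = "Is the Amazon the longest river in the world?"
passage = ("The Nile is traditionally considered the longest river in the world, "
           "although some studies argue that the Amazon is longer.")
# Encode the question/passage pair; PrimeQA's internal preprocessing may differ.
inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(model.config.id2label.get(pred, pred))  # expected labels: yes / no-answer / no
```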
### BibTeX entry and citation info
```bibtex
@article{Rosenthal2021DoAT,
title={Do Answers to Boolean Questions Need Explanations? Yes},
author={Sara Rosenthal and Mihaela A. Bornea and Avirup Sil and Radu Florian and Scott McCarley},
journal={ArXiv},
year={2021},
volume={abs/2112.07772}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
``` |
ml6team/keyphrase-extraction-kbir-openkp | f17ec53f7000ea5f7d760ef6bb49efe5e2af49ca | 2022-06-17T06:45:06.000Z | [
"pytorch",
"roberta",
"token-classification",
"en",
"dataset:midas/openkp",
"arxiv:2112.08547",
"arxiv:1911.02671",
"transformers",
"keyphrase-extraction",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | ml6team | null | ml6team/keyphrase-extraction-kbir-openkp | 88 | null | transformers | 4,841 | ---
language: en
license: mit
tags:
- keyphrase-extraction
datasets:
- midas/openkp
metrics:
- seqeval
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "FoodEx is the largest trade exhibition for food and drinks in Asia, with about 70,000 visitors checking out the products presented by hundreds of participating companies. I was lucky to enter as press; otherwise, visitors must be affiliated with the food industry— and pay ¥5,000 — to enter. The FoodEx menu is global, including everything from cherry beer from Germany and premium Mexican tequila to top-class French and Chinese dumplings. The event was a rare chance to try out both well-known and exotic foods and even see professionals making them. In addition to booths offering traditional Japanese favorites such as udon and maguro sashimi, there were plenty of innovative twists, such as dorayaki , a sweet snack made of two pancakes and a red-bean filling, that came in coffee and tomato flavors. While I was there I was lucky to catch the World Sushi Cup Japan 2013, where top chefs from around the world were competing … and presenting a wide range of styles that you would not normally see in Japan, like the flower makizushi above."
example_title: "Example 2"
model-index:
- name: ml6team/keyphrase-extraction-kbir-openkp
results:
- task:
type: keyphrase-extraction
name: Keyphrase Extraction
dataset:
type: midas/openkp
name: openkp
metrics:
- type: F1 (Seqeval)
value: 0.000
name: F1 (Seqeval)
- type: F1@M
value: 0.387
name: F1@M
---
# 🔑 Keyphrase Extraction Model: KBIR-OpenKP
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [KBIR](https://huggingface.co/bloomberg/KBIR) as its base model and fine-tunes it on the [OpenKP dataset](https://huggingface.co/datasets/midas/openkp). KBIR or Keyphrase Boundary Infilling with Replacement is a pre-trained model which utilizes a multi-task learning setup for optimizing a combined loss of Masked Language Modeling (MLM), Keyphrase Boundary Infilling (KBI) and Keyphrase Replacement Classification (KRC).
You can find more information about the architecture in this [paper](https://arxiv.org/abs/2112.08547).
Keyphrase extraction models are transformer models fine-tuned as a token classification problem where each word in the document is classified as being part of a keyphrase or not.
| Label | Description |
| ----- | ------------------------------- |
| B-KEY | At the beginning of a keyphrase |
| I-KEY | Inside a keyphrase |
| O | Outside a keyphrase |
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* Limited amount of predicted keyphrases.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
### ❓ How To Use
```python
from transformers import (
TokenClassificationPipeline,
AutoModelForTokenClassification,
AutoTokenizer,
)
from transformers.pipelines import AggregationStrategy
import numpy as np
# Define keyphrase extraction pipeline
class KeyphraseExtractionPipeline(TokenClassificationPipeline):
def __init__(self, model, *args, **kwargs):
super().__init__(
model=AutoModelForTokenClassification.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
def postprocess(self, model_outputs):
results = super().postprocess(
model_outputs=model_outputs,
aggregation_strategy=AggregationStrategy.SIMPLE,
)
return np.unique([result.get("word").strip() for result in results])
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-extraction-kbir-openkp"
extractor = KeyphraseExtractionPipeline(model=model_name)
```
```python
# Inference
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("
", " ")
keyphrases = extractor(text)
print(keyphrases)
```
```
# Output
['keyphrase extraction' 'text analysis']
```
## 📚 Training Dataset
[OpenKP](https://github.com/microsoft/OpenKP) is a large-scale, open-domain keyphrase extraction dataset with 148,124 real-world web documents along with 1-3 most relevant human-annotated keyphrases.
You can find more information in the [paper](https://arxiv.org/abs/1911.02671).
## 👷♂️ Training Procedure
For more in detail information, you can take a look at the [training notebook]().
### Training Parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 1e-4 |
| Epochs | 50 |
| Early Stopping Patience | 3 |
### Preprocessing
The documents in the dataset are already preprocessed into list of words with the corresponding labels. The only thing that must be done is tokenization and the realignment of the labels so that they correspond with the right subword tokens.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Labels
label_list = ["B", "I", "O"]
lbl2idx = {"B": 0, "I": 1, "O": 2}
idx2label = {0: "B", 1: "I", 2: "O"}
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR")
max_length = 512
# Dataset parameters
dataset_full_name = "midas/openkp"
dataset_subset = "raw"
dataset_document_column = "document"
dataset_biotags_column = "doc_bio_tags"
def preprocess_fuction(all_samples_per_split):
tokenized_samples = tokenizer.batch_encode_plus(
all_samples_per_split[dataset_document_column],
padding="max_length",
truncation=True,
is_split_into_words=True,
max_length=max_length,
)
total_adjusted_labels = []
for k in range(0, len(tokenized_samples["input_ids"])):
prev_wid = -1
word_ids_list = tokenized_samples.word_ids(batch_index=k)
existing_label_ids = all_samples_per_split[dataset_biotags_column][k]
i = -1
adjusted_label_ids = []
for wid in word_ids_list:
if wid is None:
adjusted_label_ids.append(lbl2idx["O"])
elif wid != prev_wid:
i = i + 1
adjusted_label_ids.append(lbl2idx[existing_label_ids[i]])
prev_wid = wid
else:
adjusted_label_ids.append(
lbl2idx[
f"{'I' if existing_label_ids[i] == 'B' else existing_label_ids[i]}"
]
)
total_adjusted_labels.append(adjusted_label_ids)
tokenized_samples["labels"] = total_adjusted_labels
return tokenized_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_fuction, batched=True)
```
### Postprocessing (Without Pipeline Function)
If you do not use the pipeline function, you must filter out the B and I labeled tokens. Each B and I will then be merged into a keyphrase. Finally, you need to strip the keyphrases to make sure all unnecessary spaces have been removed.
```python
# Define post_process functions
def concat_tokens_by_tag(keyphrases):
keyphrase_tokens = []
for id, label in keyphrases:
if label == "B":
keyphrase_tokens.append([id])
elif label == "I":
if len(keyphrase_tokens) > 0:
keyphrase_tokens[len(keyphrase_tokens) - 1].append(id)
return keyphrase_tokens
def extract_keyphrases(example, predictions, tokenizer, index=0):
keyphrases_list = [
(id, idx2label[label])
for id, label in zip(
np.array(example["input_ids"]).squeeze().tolist(), predictions[index]
)
if idx2label[label] in ["B", "I"]
]
processed_keyphrases = concat_tokens_by_tag(keyphrases_list)
extracted_kps = tokenizer.batch_decode(
processed_keyphrases,
skip_special_tokens=True,
clean_up_tokenization_spaces=True,
)
return np.unique([kp.strip() for kp in extracted_kps])
```
## 📝 Evaluation Results
Traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases.
The model achieves the following results on the OpenKP test set:
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|
| OpenKP Test Set | 0.13 | 0.38 | 0.19 | 0.07 | 0.38 | 0.11 | 0.45 | 0.38 | 0.39 |
For more information on the evaluation process, you can take a look at the keyphrase extraction evaluation notebook.
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
hakurei/litv2-6B-rev3 | b3149de232b8f398645015eaef23c2dd886f244b | 2022-06-17T04:54:20.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
] | text-generation | false | hakurei | null | hakurei/litv2-6B-rev3 | 88 | null | transformers | 4,842 | https://wandb.ai/haruu/mesh-transformer-jax/runs/1iae931p?workspace=user-haruu |
davidcechak/DNADebertaK8 | e34c774d03fc1c65d877d21134f86a5c58ac35e9 | 2022-07-05T22:54:48.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | davidcechak | null | davidcechak/DNADebertaK8 | 88 | null | transformers | 4,843 | Entry not found |
CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | a5482ef3f4cf508d383ef4a3cc7ce40cfa12722b | 2021-10-17T11:15:12.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment | 87 | 2 | transformers | 4,844 | ---
language:
- ar
license: apache-2.0
widget:
- text: "أنا بخير"
---
# CAMeLBERT-CA SA Model
## Model description
**CAMeLBERT-CA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model.
For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-CA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline.
#### How to use
To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component:
```python
>>> from camel_tools.sentiment import SentimentAnalyzer
>>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment")
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa.predict(sentences)
>>> ['positive', 'negative']
```
You can also use the SA model directly with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-sentiment')
>>> sentences = ['أنا بخير', 'أنا لست بخير']
>>> sa(sentences)
[{'label': 'positive', 'score': 0.9616648554801941},
{'label': 'negative', 'score': 0.9779177904129028}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Helsinki-NLP/opus-mt-fr-pl | e7d23016ef0cf42510b2a7701e36b8b08b96399c | 2021-09-09T21:56:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"pl",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-pl | 87 | null | transformers | 4,845 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fr-pl
* source languages: fr
* target languages: pl
* OPUS readme: [fr-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.eval.txt)
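The card only lists the OPUS artefacts; a minimal `transformers` usage sketch (the example sentence is invented) could look like this:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-fr-pl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))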
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.pl | 40.7 | 0.625 |
|
Helsinki-NLP/opus-mt-ja-pt | 654bbe699ed24805d8f5155246b278b3fa65acb9 | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-pt | 87 | null | transformers | 4,846 | ---
language:
- ja
- pt
tags:
- translation
license: apache-2.0
---
### jpn-por
* source group: Japanese
* target group: Portuguese
* OPUS readme: [jpn-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-por/README.md)
* model: transformer-align
* source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii
* target language(s): por por_Hira
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.eval.txt)
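As with the other OPUS-MT cards, no usage snippet is included; the minimal sketch below (example sentence invented) also shows the sentence-initial target-language token mentioned above, assumed here to be `>>por<<` since Portuguese is the only listed target.
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-ja-pt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# The card states a sentence-initial target-language token is required.
text = ">>por<< これはペンです。"
batch = tokenizer([text], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```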
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.por | 22.2 | 0.444 |
### System Info:
- hf_name: jpn-por
- source_languages: jpn
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'pt']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: por
- short_pair: ja-pt
- chrF2_score: 0.444
- bleu: 22.2
- brevity_penalty: 0.922
- ref_len: 15570.0
- src_name: Japanese
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: pt
- prefer_old: False
- long_pair: jpn-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
NLP4H/ms_bert | 7883a03983cc71f12af68e838fad140865cbf97f | 2021-05-18T21:46:48.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | NLP4H | null | NLP4H/ms_bert | 87 | null | transformers | 4,847 | # MS-BERT
## Introduction
This repository provides codes and models of MS-BERT.
MS-BERT was pre-trained on notes from neurological examination for Multiple Sclerosis (MS) patients at St. Michael's Hospital in Toronto, Canada.
## Data
The dataset contained approximately 75,000 clinical notes, for about 5000 patients, totaling to over 35.7 million words.
These notes were collected from patients who visited St. Michael's Hospital MS Clinic between 2015 to 2019.
The notes contained a variety of information pertaining to a neurological exam.
For example, a note can contain information on the patient's condition, their progress over time and diagnosis.
The gender split within the dataset was observed to be 72% female and 28% male ([which reflects the natural discrepancy seen in MS][1]).
Further sections will describe how MS-BERT was pre trained through the use of these clinically relevant and rich neurological notes.
## Data pre-processing
The data was pre-processed to remove any identifying information. This includes information on: patient names, doctor names, hospital names, patient identification numbers, phone numbers, addresses, and time. In order to de-identify the information, we used a curated database that contained patient and doctor information. This curated database was paired with regular expressions to find and remove any identifying pieces of information. Each of these identifiers were replaced with a specific token. These tokens were chosen based on three criteria: (1) they belong to the current BERT vocab, (2), they have relatively the same semantic meaning as the word they are replacing, and (3), the token is not found in the original unprocessed dataset. The replacements that met the criteria above were as follows:
- Female first names -> Lucie
- Male first names -> Ezekiel
- Last/family names -> Salamanca
- Dates -> 2010s
- Patient IDs -> 999
- Phone numbers -> 1718
- Addresses -> Silesia
- Time -> 1610
- Locations/Hospital/Clinic names -> Troy
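A schematic illustration of the substitution step described above (the patterns below are invented for illustration; the actual pipeline and its regular expressions are not released):
```python
import re
# Illustrative patterns only; the real system paired a curated database of
# patient and doctor names with regular expressions.
REPLACEMENTS = [
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "2010s"),   # dates
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "1718"),    # phone numbers
    (re.compile(r"\bMRN\s*\d+\b"), "999"),             # patient IDs
]
def deidentify(note: str) -> str:
    for pattern, token in REPLACEMENTS:
        note = pattern.sub(token, note)
    return note
print(deidentify("Seen on 2018-03-07, MRN 482913, call 416-555-0199."))
```
The real pipeline additionally matched names against the curated patient/doctor database before substitution.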
## Pre-training
The starting point for our model is the already pre-trained and fine-tuned BLUE-BERT base. We further pre-train it using the masked language modelling task from the huggingface transformers [library](https://github.com/huggingface).
The hyperparameters can be found in the config file in this repository or [here](https://s3.amazonaws.com/models.huggingface.co/bert/NLP4H/ms_bert/config.json)
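A schematic of this continued-pre-training step written with the `transformers` Trainer API; the BlueBERT checkpoint name, data file, sequence length and training arguments below are placeholders rather than the settings actually used for MS-BERT.
```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
# Assumed BlueBERT starting checkpoint; substitute the one you actually start from.
model_name = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
# De-identified notes, one per line (placeholder file name).
dataset = load_dataset("text", data_files={"train": "deidentified_notes.txt"})
tokenized = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                        batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="ms-bert-mlm", num_train_epochs=1),
                  train_dataset=tokenized["train"],
                  data_collator=collator)
trainer.train()
```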
## Acknowledgements
We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) department, St. Michael’s Hospital, for providing consistent support and guidance throughout this project.
We would also like to thank Dr. Marzyeh Ghassemi, Taylor Killan, Nathan Ng and Haoran Zhang for providing us the opportunity to work on this exciting project.
## Disclaimer
MS-BERT shows the results of research conducted at the Data Science and Advanced Analytics (DSAA) department, St. Michael’s Hospital. The results produced by MS-BERT are not intended for direct diagnostic use or medical decision-making without review and oversight by a clinical professional. Individuals should not make decisions about their health solely on the basis of the results produced by MS-BERT. St. Michael’s Hospital does not independently verify the validity or utility of the results produced by MS-BERT. If you have questions about the results produced by MS-BERT please consult a healthcare professional. If you would like more information about the research conducted at DSAA please contact [Zhen Yang](mailto:[email protected]). If you would like more information on neurological examination notes please contact [Dr. Tony Antoniou](mailto:[email protected]) or [Dr. Jiwon Oh](mailto:[email protected]) from the MS clinic at St. Michael's Hospital.
[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3707353/
|
NbAiLab/nb-bert-base-ner | b925de07fe9e9dc55c20cd72364b791aef42e2d0 | 2022-06-01T10:49:24.000Z | [
"pytorch",
"bert",
"token-classification",
"no",
"dataset:norne",
"transformers",
"norwegian",
"ner",
"license:cc-by-4.0",
"autotrain_compatible"
] | token-classification | false | NbAiLab | null | NbAiLab/nb-bert-base-ner | 87 | null | transformers | 4,848 | ---
language: no
license: cc-by-4.0
tags:
- norwegian
- bert
- ner
thumbnail: nblogo_3.png
pipeline_tag: token-classification
datasets:
- norne
inference:
parameters:
aggregation_strategy: "first"
widget:
- text: Trond Giske har bekreftet på spørsmål fra Adresseavisen at Hansen leide et rom i hans leilighet i Trondheim.
---
**Release 1.0** (November 17, 2021)
# nb-bert-base-ner
## Description
NB-Bert base model fine-tuned on the Named Entity Recognition task using the [NorNE dataset](https://huggingface.co/datasets/NbAiLab/norne).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-ner")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Jeg heter Kjell og bor i Oslo."
ner_results = nlp(example)
print(ner_results)
``` |
Saz/DialoGPT-small-saz | 10167fd59643d1389903b0f470c64bae721de24c | 2021-10-08T06:08:56.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Saz | null | Saz/DialoGPT-small-saz | 87 | null | transformers | 4,849 | ---
tags:
- conversational
---
# Saz DialoGPT Model |
UBC-NLP/AraT5-tweet-base | 8926762f3fdf3661c8b632f368e1a5f5b4c5a1ab | 2022-05-26T18:26:55.000Z | [
"pytorch",
"tf",
"t5",
"ar",
"transformers",
"Arabic T5",
"MSA",
"Twitter",
"Arabic Dialect",
"Arabic Machine Translation",
"Arabic Text Summarization",
"Arabic News Title and Question Generation",
"Arabic Paraphrasing and Transliteration",
"Arabic Code-Switched Translation"
] | null | false | UBC-NLP | null | UBC-NLP/AraT5-tweet-base | 87 | 1 | transformers | 4,850 | ---
language:
- ar
tags:
- Arabic T5
- MSA
- Twitter
- Arabic Dialect
- Arabic Machine Translation
- Arabic Text Summarization
- Arabic News Title and Question Generation
- Arabic Paraphrasing and Transliteration
- Arabic Code-Switched Translation
---
# AraT5-base
# AraT5: Text-to-Text Transformers for Arabic Language Generation
<img src="https://huggingface.co/UBC-NLP/AraT5-base/resolve/main/AraT5_CR_new.png" alt="AraT5" width="45%" height="35%" align="right"/>
This is the repository accompanying our paper [AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation](https://aclanthology.org/2022.acl-long.47/). In this repository we introduce **AraT5<sub>MSA</sub>**, **AraT5<sub>Tweet</sub>**, and **AraT5**: three powerful Arabic-specific text-to-text Transformer-based models.
---
# How to use AraT5 models
Below is an example for fine-tuning **AraT5-base** for News Title Generation on the Aranews dataset
``` bash
!python run_trainier_seq2seq_huggingface.py \
--learning_rate 5e-5 \
--max_target_length 128 --max_source_length 128 \
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 \
--model_name_or_path "UBC-NLP/AraT5-base" \
--output_dir "/content/AraT5_FT_title_generation" --overwrite_output_dir \
--num_train_epochs 3 \
--train_file "/content/ARGEn_title_genration_sample_train.tsv" \
--validation_file "/content/ARGEn_title_genration_sample_valid.tsv" \
--task "title_generation" --text_column "document" --summary_column "title" \
--load_best_model_at_end --metric_for_best_model "eval_bleu" --greater_is_better True --evaluation_strategy epoch --logging_strategy epoch --predict_with_generate\
--do_train --do_eval
```
For more details about the fine-tuning example, please read this notebook [](https://github.com/UBC-NLP/araT5/blob/main/examples/Fine_tuning_AraT5.ipynb)
In addition, we release the fine-tuned checkpoint of the News Title Generation (NGT) which is described in the paper. The model available at Huggingface ([UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation)).
For more details, please visit our own [GitHub](https://github.com/UBC-NLP/araT5).
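This card describes the **AraT5-tweet-base** checkpoint. To load it directly for generation or further fine-tuning, a minimal sketch (standard Hugging Face loading; nothing here is specific to the original training scripts):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the Twitter-domain AraT5 checkpoint described in this card
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/AraT5-tweet-base")
model = AutoModelForSeq2SeqLM.from_pretrained("UBC-NLP/AraT5-tweet-base")
```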
# AraT5 Models Checkpoints
AraT5 Pytorch and TensorFlow checkpoints are available on the Huggingface website for direct download and use ```exclusively for research```. ```For commercial use, please contact the authors via email @ (muhammad.mageed[at]ubc[dot]ca).```
| **Model** | **Link** |
|---------|:------------------:|
| **AraT5-base** | [https://huggingface.co/UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) |
| **AraT5-msa-base** | [https://huggingface.co/UBC-NLP/AraT5-msa-base](https://huggingface.co/UBC-NLP/AraT5-msa-base) |
| **AraT5-tweet-base** | [https://huggingface.co/UBC-NLP/AraT5-tweet-base](https://huggingface.co/UBC-NLP/AraT5-tweet-base) |
| **AraT5-msa-small** | [https://huggingface.co/UBC-NLP/AraT5-msa-small](https://huggingface.co/UBC-NLP/AraT5-msa-small) |
| **AraT5-tweet-small**| [https://huggingface.co/UBC-NLP/AraT5-tweet-small](https://huggingface.co/UBC-NLP/AraT5-tweet-small) |
# BibTex
If you use our models (Arat5-base, Arat5-msa-base, Arat5-tweet-base, Arat5-msa-small, or Arat5-tweet-small ) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{nagoudi-etal-2022-arat5,
title = "{A}ra{T}5: Text-to-Text Transformers for {A}rabic Language Generation",
author = "Nagoudi, El Moatez Billah and
Elmadany, AbdelRahim and
Abdul-Mageed, Muhammad",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.47",
pages = "628--647",
abstract = "Transfer learning with a unified Transformer framework (T5) that converts all language problems into a text-to-text format was recently proposed as a simple and effective transfer learning approach. Although a multilingual version of the T5 model (mT5) was also introduced, it is not clear how well it can fare on non-English tasks involving diverse data. To investigate this question, we apply mT5 on a language with a wide variety of dialects{--}Arabic. For evaluation, we introduce a novel benchmark for ARabic language GENeration (ARGEN), covering seven important tasks. For model comparison, we pre-train three powerful Arabic T5-style models and evaluate them on ARGEN. Although pre-trained with {\textasciitilde}49 less data, our new models perform significantly better than mT5 on all ARGEN tasks (in 52 out of 59 test sets) and set several new SOTAs. Our models also establish new SOTA on the recently-proposed, large Arabic language understanding evaluation benchmark ARLUE (Abdul-Mageed et al., 2021). Our new models are publicly available. We also link to ARGEN datasets through our repository: https://github.com/UBC-NLP/araT5.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
facebook/detr-resnet-50-dc5 | e3ea9b76022e0df3be936441d20ee5470886f557 | 2022-06-27T08:36:56.000Z | [
"pytorch",
"detr",
"object-detection",
"dataset:coco",
"arxiv:2005.12872",
"transformers",
"vision",
"license:apache-2.0"
] | object-detection | false | facebook | null | facebook/detr-resnet-50-dc5 | 87 | 1 | transformers | 4,851 | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50-dc5')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
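To turn the raw logits and normalized boxes into labeled detections in pixel coordinates, the feature extractor's post-processing helper can be used. A minimal sketch (the 0.9 confidence threshold is an arbitrary choice, and the helper may be named `post_process_object_detection` in newer `transformers` versions):
```python
import torch

# rescale the normalized predictions back to the original image size
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = feature_extractor.post_process(outputs, target_sizes=target_sizes)[0]

keep = results["scores"] > 0.9  # keep confident detections only
for score, label, box in zip(results["scores"][keep], results["labels"][keep], results["boxes"][keep]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```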
## Training data
The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves an AP (average precision) of **43.3** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
franklu/pubmed_bert_squadv2 | 3abeb3561a41c7960a4b24b4a9629d06d9f2254b | 2021-07-09T05:25:26.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | franklu | null | franklu/pubmed_bert_squadv2 | 87 | null | transformers | 4,852 | **[`microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext`](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py)**
Tuning script:
```bash
BASE_MODEL=microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
OUTPUT_DIR=~/Documents/projects/tunned_models/ms_pubmed_bert_squadv2/
python run_qa.py \
--model_name_or_path $BASE_MODEL\
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir $OUTPUT_DIR
``` |
fspanda/Medical-Bio-BERT2 | f3fe25c06099883d3116d1395afa8df37fa0cb93 | 2021-05-19T16:57:41.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | fspanda | null | fspanda/Medical-Bio-BERT2 | 87 | null | transformers | 4,853 | Entry not found |
google/t5-large-ssm | 0b0101c39a17cb38660e671692bdd892ba5d352d | 2021-06-23T01:49:33.000Z | [
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:wikipedia",
"arxiv:2002.08909",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | google | null | google/t5-large-ssm | 87 | 1 | transformers | 4,854 | ---
language: en
datasets:
- c4
- wikipedia
license: apache-2.0
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**.
The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4) and subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.pdf)'s salient span masking objective on [Wikipedia](https://huggingface.co/datasets/wikipedia).
**Note**: This model should be fine-tuned on a question answering downstream task before it is usable for closed book question answering.
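As a starting point for such fine-tuning, the checkpoint can be loaded with the standard T5 classes. A minimal sketch (loading only; this does not perform the required downstream fine-tuning):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-large-ssm")
model = T5ForConditionalGeneration.from_pretrained("google/t5-large-ssm")
```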
Other Community Checkpoints: [here](https://huggingface.co/models?search=ssm)
Paper: [How Much Knowledge Can You Pack Into the Parameters of a Language Model?](https://arxiv.org/abs/1910.10683.pdf)
Authors: *Adam Roberts, Colin Raffel, Noam Shazeer*
## Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
 |
illuin/camembert-base-fquad | 8fea54f3caae6eef44ee1d1c52026ac48752a6c0 | 2020-12-11T21:45:27.000Z | [
"pytorch",
"camembert",
"question-answering",
"fr",
"dataset:fquad",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | question-answering | false | illuin | null | illuin/camembert-base-fquad | 87 | 3 | transformers | 4,855 | ---
language: fr
tags:
- question-answering
- camembert
license: gpl-3.0
datasets:
- fquad
---
# camembert-base-fquad
## Description
A native French Question Answering model [CamemBERT-base](https://camembert-model.fr/) fine-tuned on [FQuAD](https://fquad.illuin.tech/).
## Evaluation results
On the development set.
```shell
{"f1": 88.1, "exact_match": 78.1}
```
On the test set.
```shell
{"f1": 88.3, "exact_match": 78.0}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='illuin/camembert-base-fquad', tokenizer='illuin/camembert-base-fquad')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Citation
If you use our work, please cite:
```bibtex
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
|
mrm8488/layoutlm-finetuned-funsd | 3de04e17d2fb21729336dea31651b573e5e3c33a | 2021-08-01T16:39:26.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | mrm8488 | null | mrm8488/layoutlm-finetuned-funsd | 87 | null | transformers | 4,856 | # LayoutLM fine-tuned on FUNSD for Document/Forms token classification
## Usage (WIP)
```python
import torch
import numpy as np
from PIL import Image, ImageDraw, ImageFont
import pytesseract
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = LayoutLMTokenizer.from_pretrained("mrm8488/layoutlm-finetuned-funsd")
model = LayoutLMForTokenClassification.from_pretrained("mrm8488/layoutlm-finetuned-funsd", num_labels=13)
model.to(device)
image = Image.open("/83443897.png")
image = image.convert("RGB")
# Display the image
# Run Tesseract (OCR) on the image
width, height = image.size
w_scale = 1000/width
h_scale = 1000/height
ocr_df = pytesseract.image_to_data(image, output_type='data.frame')
ocr_df = ocr_df.dropna().assign(left_scaled = ocr_df.left*w_scale,
width_scaled = ocr_df.width*w_scale,
top_scaled = ocr_df.top*h_scale,
height_scaled = ocr_df.height*h_scale,
right_scaled = lambda x: x.left_scaled + x.width_scaled,
bottom_scaled = lambda x: x.top_scaled + x.height_scaled)
float_cols = ocr_df.select_dtypes('float').columns
ocr_df[float_cols] = ocr_df[float_cols].round(0).astype(int)
ocr_df = ocr_df.replace(r'^\s*$', np.nan, regex=True)
ocr_df = ocr_df.dropna().reset_index(drop=True)
ocr_df[:20]
# create a list of words, actual bounding boxes, and normalized boxes
words = list(ocr_df.text)
coordinates = ocr_df[['left', 'top', 'width', 'height']]
actual_boxes = []
for idx, row in coordinates.iterrows():
x, y, w, h = tuple(row) # the row comes in (left, top, width, height) format
actual_box = [x, y, x+w, y+h] # we turn it into (left, top, left+widght, top+height) to get the actual box
actual_boxes.append(actual_box)
def normalize_box(box, width, height):
return [
int(1000 * (box[0] / width)),
int(1000 * (box[1] / height)),
int(1000 * (box[2] / width)),
int(1000 * (box[3] / height)),
]
boxes = []
for box in actual_boxes:
boxes.append(normalize_box(box, width, height))
# Display boxes
def convert_example_to_features(image, words, boxes, actual_boxes, tokenizer, args, cls_token_box=[0, 0, 0, 0],
sep_token_box=[1000, 1000, 1000, 1000],
pad_token_box=[0, 0, 0, 0]):
width, height = image.size
tokens = []
token_boxes = []
actual_bboxes = [] # we use an extra b because actual_boxes is already used
token_actual_boxes = []
for word, box, actual_bbox in zip(words, boxes, actual_boxes):
word_tokens = tokenizer.tokenize(word)
tokens.extend(word_tokens)
token_boxes.extend([box] * len(word_tokens))
actual_bboxes.extend([actual_bbox] * len(word_tokens))
token_actual_boxes.extend([actual_bbox] * len(word_tokens))
# Truncation: account for [CLS] and [SEP] with "- 2".
special_tokens_count = 2
if len(tokens) > args.max_seq_length - special_tokens_count:
tokens = tokens[: (args.max_seq_length - special_tokens_count)]
token_boxes = token_boxes[: (args.max_seq_length - special_tokens_count)]
actual_bboxes = actual_bboxes[: (args.max_seq_length - special_tokens_count)]
token_actual_boxes = token_actual_boxes[: (args.max_seq_length - special_tokens_count)]
# add [SEP] token, with corresponding token boxes and actual boxes
tokens += [tokenizer.sep_token]
token_boxes += [sep_token_box]
actual_bboxes += [[0, 0, width, height]]
token_actual_boxes += [[0, 0, width, height]]
segment_ids = [0] * len(tokens)
# next: [CLS] token
tokens = [tokenizer.cls_token] + tokens
token_boxes = [cls_token_box] + token_boxes
actual_bboxes = [[0, 0, width, height]] + actual_bboxes
token_actual_boxes = [[0, 0, width, height]] + token_actual_boxes
segment_ids = [1] + segment_ids
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding_length = args.max_seq_length - len(input_ids)
input_ids += [tokenizer.pad_token_id] * padding_length
input_mask += [0] * padding_length
segment_ids += [tokenizer.pad_token_id] * padding_length
token_boxes += [pad_token_box] * padding_length
token_actual_boxes += [pad_token_box] * padding_length
assert len(input_ids) == args.max_seq_length
assert len(input_mask) == args.max_seq_length
assert len(segment_ids) == args.max_seq_length
assert len(token_boxes) == args.max_seq_length
assert len(token_actual_boxes) == args.max_seq_length
return input_ids, input_mask, segment_ids, token_boxes, token_actual_boxes
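# NOTE (assumption): `args` and `label_map` are used below but never defined in this
# snippet. A minimal stand-in so the example can run end-to-end:
from types import SimpleNamespace
args = SimpleNamespace(max_seq_length=512)  # LayoutLM's maximum sequence length
label_map = model.config.id2label  # assumed to map class ids to the FUNSD IOB label strings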
input_ids, input_mask, segment_ids, token_boxes, token_actual_boxes = convert_example_to_features(image=image, words=words, boxes=boxes, actual_boxes=actual_boxes, tokenizer=tokenizer, args=args)
input_ids = torch.tensor(input_ids, device=device).unsqueeze(0)
attention_mask = torch.tensor(input_mask, device=device).unsqueeze(0)
token_type_ids = torch.tensor(segment_ids, device=device).unsqueeze(0)
bbox = torch.tensor(token_boxes, device=device).unsqueeze(0)
outputs = model(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids)
token_predictions = outputs.logits.argmax(-1).squeeze().tolist() # the predictions are at the token level
word_level_predictions = [] # let's turn them into word level predictions
final_boxes = []
for id, token_pred, box in zip(input_ids.squeeze().tolist(), token_predictions, token_actual_boxes):
if (tokenizer.decode([id]).startswith("##")) or (id in [tokenizer.cls_token_id,
tokenizer.sep_token_id,
tokenizer.pad_token_id]):
# skip prediction + bounding box
continue
else:
word_level_predictions.append(token_pred)
final_boxes.append(box)
#print(word_level_predictions)
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
def iob_to_label(label):
if label != 'O':
return label[2:]
else:
return "other"
label2color = {'question':'blue', 'answer':'green', 'header':'orange', 'other':'violet'}
for prediction, box in zip(word_level_predictions, final_boxes):
predicted_label = iob_to_label(label_map[prediction]).lower()
draw.rectangle(box, outline=label2color[predicted_label])
draw.text((box[0] + 10, box[1] - 10), text=predicted_label, fill=label2color[predicted_label], font=font)
# Display the result (image)
``` |
mrm8488/legalectra-small-spanish | 4854a72623057dcf56e51f7efb3a0cb15398388e | 2022-03-30T21:06:31.000Z | [
"pytorch",
"electra",
"pretraining",
"es",
"dataset:Spanish-legal-corpora",
"transformers",
"Spanish",
"Electra",
"Legal"
] | null | false | mrm8488 | null | mrm8488/legalectra-small-spanish | 87 | 2 | transformers | 4,857 | ---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---
## LEGALECTRA ⚖️
**LEGALECTRA** (small) is an ELECTRA-like model (the discriminator, in this case) trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on 1 Tesla V100 16GB.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.955|
|Precision| 0.790|
|AUC | 0.971|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
TBA
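In the meantime, a minimal sketch of the standard ELECTRA discriminator usage (assuming the checkpoint ships its tokenizer files; the example sentence and the replaced word are invented for illustration):
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_name = "mrm8488/legalectra-small-spanish"
discriminator = ElectraForPreTraining.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

# A sentence where one token has been replaced ("tortilla" instead of "sentencia")
fake_sentence = "El juez dictó tortilla ayer"
fake_tokens = tokenizer.tokenize(fake_sentence)
inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

# The discriminator outputs one logit per token: > 0 means "predicted as replaced"
logits = discriminator(inputs).logits
predictions = (logits > 0).int()
print(list(zip(fake_tokens, predictions.squeeze().tolist()[1:-1])))
```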
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022legalectra,
title={Spanish Legal Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/legalectra-small-spanish}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
nateraw/test_model_a | 25e32086060f76efb5194dcb2db351b90ebf5981 | 2021-07-13T04:52:00.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:image_folder",
"transformers",
"generated_from_trainer"
] | image-classification | false | nateraw | null | nateraw/test_model_a | 87 | null | transformers | 4,858 | ---
tags:
- generated_from_trainer
datasets:
- image_folder
model_index:
- name: test_model_a
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model_a
This model is a fine-tuned version of [lysandre/tiny-vit-random](https://huggingface.co/lysandre/tiny-vit-random) on the image_folder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 40
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.1.dev0
- Tokenizers 0.10.3
|
mustapha/flipped-image-ViT | 3c76c3aa2325d0d5678c64e7b536ead6903e64b1 | 2022-03-31T12:30:19.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | mustapha | null | mustapha/flipped-image-ViT | 87 | 1 | transformers | 4,859 | Hello world,
This model has been created in the context of the `Fatima Fellowship Programme`. The model was trained on the CIFAR-10 dataset with a good final accuracy of around 98%.
This model determines whether an image is flipped or not.
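A minimal inference sketch (it assumes the checkpoint ships a preprocessor config; the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_name = "mustapha/flipped-image-ViT"
extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("example.jpg")  # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
``` |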
moshew/bert-mini-sst2-distilled | 61728bcf9f705e4161ee2be3185bfb48f7a1c617 | 2022-04-13T11:33:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | moshew | null | moshew/bert-mini-sst2-distilled | 87 | null | transformers | 4,860 | Entry not found |
tinkoff-ai/response-quality-classifier-base | 4d3c0a32337d670038becdbd0478e85012795a3c | 2022-06-01T06:34:22.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers",
"conversational",
"license:mit"
] | text-classification | false | tinkoff-ai | null | tinkoff-ai/response-quality-classifier-base | 87 | null | transformers | 4,861 | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [DeepPavlov/rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence).
The model should be used to produce relevance and specificity of the last message in the context of a dialogue.
The labels explanation:
- `relevance`: is the last message in the dialogue relevant in the context of the full dialogue.
- `specificity`: is the last message in the dialogue interesting and promotes the continuation of the dialogue.
It is pretrained on a large corpus of dialog data in an unsupervised manner: the model is trained to predict whether the last response belongs to the real dialog or was pulled from some other dialog at random.
Then it was finetuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with ``` max_length = 32 ```.
The performance of the model on validation split (dataset will be posted soon) (with the best thresholds for validation samples):
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.49 | 0.84 | 0.79 |
| specificity | 0.53 | 0.83 | 0.83 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-base')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-base')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
The [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers) where you can easily interact with this model.
The work was done during internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). |
Theivaprakasham/layoutlmv3-finetuned-sroie | 6c8e3d5dcdf9e36ca53ad490f93714c44bbce3a3 | 2022-06-07T18:08:04.000Z | [
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"dataset:sroie",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | Theivaprakasham | null | Theivaprakasham/layoutlmv3-finetuned-sroie | 87 | null | transformers | 4,862 | ---
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-sroie
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
args: sroie
metrics:
- name: Precision
type: precision
value: 0.9370529327610873
- name: Recall
type: recall
value: 0.9438040345821326
- name: F1
type: f1
value: 0.9404163675520459
- name: Accuracy
type: accuracy
value: 0.9945347083116948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-sroie
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Precision: 0.9371
- Recall: 0.9438
- F1: 0.9404
- Accuracy: 0.9945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.32 | 100 | 0.1127 | 0.6466 | 0.6102 | 0.6279 | 0.9729 |
| No log | 0.64 | 200 | 0.0663 | 0.8215 | 0.7428 | 0.7802 | 0.9821 |
| No log | 0.96 | 300 | 0.0563 | 0.8051 | 0.8718 | 0.8371 | 0.9855 |
| No log | 1.28 | 400 | 0.0470 | 0.8766 | 0.8595 | 0.8680 | 0.9895 |
| 0.1328 | 1.6 | 500 | 0.0419 | 0.8613 | 0.9128 | 0.8863 | 0.9906 |
| 0.1328 | 1.92 | 600 | 0.0338 | 0.8888 | 0.9099 | 0.8993 | 0.9926 |
| 0.1328 | 2.24 | 700 | 0.0320 | 0.8690 | 0.9467 | 0.9062 | 0.9929 |
| 0.1328 | 2.56 | 800 | 0.0348 | 0.8960 | 0.9438 | 0.9193 | 0.9931 |
| 0.1328 | 2.88 | 900 | 0.0300 | 0.9169 | 0.9460 | 0.9312 | 0.9942 |
| 0.029 | 3.19 | 1000 | 0.0281 | 0.9080 | 0.9452 | 0.9262 | 0.9942 |
| 0.029 | 3.51 | 1100 | 0.0259 | 0.9174 | 0.9438 | 0.9304 | 0.9945 |
| 0.029 | 3.83 | 1200 | 0.0309 | 0.9207 | 0.9532 | 0.9366 | 0.9944 |
| 0.029 | 4.15 | 1300 | 0.0366 | 0.9195 | 0.9388 | 0.9291 | 0.9940 |
| 0.029 | 4.47 | 1400 | 0.0302 | 0.9343 | 0.9424 | 0.9383 | 0.9949 |
| 0.0174 | 4.79 | 1500 | 0.0349 | 0.9142 | 0.9517 | 0.9326 | 0.9939 |
| 0.0174 | 5.11 | 1600 | 0.0327 | 0.9322 | 0.9510 | 0.9415 | 0.9950 |
| 0.0174 | 5.43 | 1700 | 0.0317 | 0.9215 | 0.9561 | 0.9385 | 0.9938 |
| 0.0174 | 5.75 | 1800 | 0.0385 | 0.9282 | 0.9316 | 0.9299 | 0.9940 |
| 0.0174 | 6.07 | 1900 | 0.0342 | 0.9235 | 0.9481 | 0.9357 | 0.9944 |
| 0.0117 | 6.39 | 2000 | 0.0344 | 0.9287 | 0.9474 | 0.9379 | 0.9944 |
| 0.0117 | 6.71 | 2100 | 0.0388 | 0.9232 | 0.9445 | 0.9338 | 0.9941 |
| 0.0117 | 7.03 | 2200 | 0.0325 | 0.9269 | 0.9496 | 0.9381 | 0.9949 |
| 0.0117 | 7.35 | 2300 | 0.0343 | 0.9225 | 0.9438 | 0.9330 | 0.9941 |
| 0.0117 | 7.67 | 2400 | 0.0372 | 0.9216 | 0.9481 | 0.9347 | 0.9944 |
| 0.0081 | 7.99 | 2500 | 0.0385 | 0.9192 | 0.9589 | 0.9386 | 0.9944 |
| 0.0081 | 8.31 | 2600 | 0.0376 | 0.9293 | 0.9467 | 0.9379 | 0.9944 |
| 0.0081 | 8.63 | 2700 | 0.0425 | 0.9261 | 0.9474 | 0.9366 | 0.9941 |
| 0.0081 | 8.95 | 2800 | 0.0407 | 0.9266 | 0.9452 | 0.9358 | 0.9941 |
| 0.0081 | 9.27 | 2900 | 0.0403 | 0.9280 | 0.9467 | 0.9372 | 0.9941 |
| 0.0055 | 9.58 | 3000 | 0.0364 | 0.9287 | 0.9474 | 0.9379 | 0.9948 |
| 0.0055 | 9.9 | 3100 | 0.0427 | 0.9122 | 0.9510 | 0.9312 | 0.9941 |
| 0.0055 | 10.22 | 3200 | 0.0394 | 0.9223 | 0.9488 | 0.9354 | 0.9943 |
| 0.0055 | 10.54 | 3300 | 0.0393 | 0.9247 | 0.9561 | 0.9401 | 0.9945 |
| 0.0055 | 10.86 | 3400 | 0.0413 | 0.9334 | 0.9496 | 0.9414 | 0.9945 |
| 0.0049 | 11.18 | 3500 | 0.0400 | 0.9290 | 0.9517 | 0.9402 | 0.9945 |
| 0.0049 | 11.5 | 3600 | 0.0412 | 0.9317 | 0.9539 | 0.9427 | 0.9945 |
| 0.0049 | 11.82 | 3700 | 0.0419 | 0.9314 | 0.9481 | 0.9397 | 0.9947 |
| 0.0049 | 12.14 | 3800 | 0.0452 | 0.9243 | 0.9503 | 0.9371 | 0.9941 |
| 0.0049 | 12.46 | 3900 | 0.0412 | 0.9334 | 0.9496 | 0.9414 | 0.9947 |
| 0.0039 | 12.78 | 4000 | 0.0438 | 0.9294 | 0.9481 | 0.9387 | 0.9941 |
| 0.0039 | 13.1 | 4100 | 0.0416 | 0.9326 | 0.9467 | 0.9396 | 0.9944 |
| 0.0039 | 13.42 | 4200 | 0.0418 | 0.9327 | 0.9488 | 0.9407 | 0.9948 |
| 0.0039 | 13.74 | 4300 | 0.0423 | 0.9345 | 0.9460 | 0.9402 | 0.9946 |
| 0.0039 | 14.06 | 4400 | 0.0419 | 0.9286 | 0.9467 | 0.9376 | 0.9947 |
| 0.0022 | 14.38 | 4500 | 0.0426 | 0.9371 | 0.9438 | 0.9404 | 0.9945 |
| 0.0022 | 14.7 | 4600 | 0.0424 | 0.9371 | 0.9445 | 0.9408 | 0.9947 |
| 0.0022 | 15.02 | 4700 | 0.0427 | 0.9372 | 0.9467 | 0.9419 | 0.9947 |
| 0.0022 | 15.34 | 4800 | 0.0431 | 0.9339 | 0.9460 | 0.9399 | 0.9945 |
| 0.0022 | 15.65 | 4900 | 0.0431 | 0.9346 | 0.9467 | 0.9406 | 0.9946 |
| 0.0015 | 15.97 | 5000 | 0.0434 | 0.9324 | 0.9445 | 0.9384 | 0.9945 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
crumb/gpt-j-6b-shakespeare | eee121823da348a498f0f544c6afdb34c7880923 | 2022-07-20T18:06:37.000Z | [
"pytorch",
"gptj",
"text-generation",
"en",
"dataset:The Pile",
"dataset:tiny_shakespeare",
"arxiv:2101.00027",
"transformers",
"causal-lm"
] | text-generation | false | crumb | null | crumb/gpt-j-6b-shakespeare | 87 | null | transformers | 4,863 | ---
language:
- en
tags:
- pytorch
- causal-lm
datasets:
- The Pile
- tiny_shakespeare
inference: false
---
# GPT-J 6b Shakespeare
<p style="color:green"> <b> 1.) The "Hosted inference API" is turned off. Go to the <a href="https://huggingface.co/crumb/gpt-j-6b-shakespeare#how-to-use">How to Use</a> section <br>
2.) This is a "proof of concept" and not fully trained, simple training script also in "How to Use" section. </b>
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
This checkpoint is a finetuned version of the original [GPT-J 6b](https://huggingface.co/EleutherAI/gpt-j-6B) on [tiny_shakespeare](https://huggingface.co/datasets/tiny_shakespeare)
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
This checkpoint was afterwards finetuned on [tiny_shakespeare](https://huggingface.co/datasets/tiny_shakespeare) by [crumb](https://huggingface.co/crumb) (me)
> 40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
## Training Procedure
| Parameter | Value |
|----------------------|------------|
| epochs | 1 |
| learning rate | .002 |
| weight decay | .01 |
| batch size | 8 |
| context length (tokens) | 256 |
Trained on 1 Tesla T4 from [google colab](https://colab.research.google.com/)
```TrainOutput(global_step=147, training_loss=1.665000240818984, metrics={'train_runtime': 2828.7347, 'train_samples_per_second': 0.417, 'train_steps_per_second': 0.052, 'total_flos': 1555992281088.0, 'train_loss': 1.665000240818984, 'epoch': 1.0})```
A good starting point to finetune your own gpt-j-6b would be [hivemind's 8bit training code](https://huggingface.co/hivemind/gpt-j-6B-8bit), or with the notebook in [this repository](https://github.com/aicrumb/gpt-j-8bit) which you can download and open in [google colab](https://colab.research.google.com/) or any other ipynb service
No LoRA adapters were used, for the sake of easy loading and inference with 🤗. Only Linear biases and LayerNorm scales were passed to the optimizer.
## Intended Use and Limitations
(same as [gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6B))
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
```python
# libraries and a wrapper around hivemind's quantization code
!pip install transformers==4.14.1 bitsandbytes-cuda111==0.26.0 git+https://github.com/aicrumb/transformers-8bit -q
import transformers_8bit
model, tokenizer, config = transformers_8bit.load_gptj("crumb/gpt-j-6b-shakespeare", device='cuda')
prompt = tokenizer("Romeo:", return_tensors='pt')
prompt = {key: value.to('cuda') for key, value in prompt.items()}
out = model.generate(**prompt, min_length=64, max_length=64, do_sample=True, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
""" example output
Romeo: [Aside] And but in night, how tedious
Is the day's celebration!
JULIET: [Aside] O me! how quick skips time!
Bid Time himself look out And, after no long date,
Call time up o'er-head,
"""
```
### Limitations and Biases
(same as [gpt-j-6b](https://huggingface.co/EleutherAI/gpt-j-6B))
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## To do:
- clean up training code & create github repo for training related models
- see if converting to fp16 or fp32 fixes the inference on the card
## Citations and Related Information
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
```bibtex
@misc{
author={Karpathy, Andrej},
title={char-rnn},
year={2015},
howpublished={\url{https://github.com/karpathy/char-rnn}}
}
``` |
tilomichel/mT5-base-GermanQuAD-e2e-qg | 61f6573dde3826229f574d2aaf2ff4ea07d961cc | 2022-07-02T10:46:35.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"de",
"dataset:deepset/germanquad",
"arxiv:2010.11934",
"arxiv:2005.01107",
"transformers",
"question generation",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | tilomichel | null | tilomichel/mT5-base-GermanQuAD-e2e-qg | 87 | null | transformers | 4,864 | ---
license: mit
widget:
- text: "generate question: KMI ist eine Variante des allgemeinen Bachelors Informatik und damit zu ca. 80% identisch mit dem allgemeinen Bachelor Informatik, d.h. auch diese Variante ist ein Informatikstudium mit einem hohen Programmieranteil. Der Studienschwerpunkt adressiert insbesondere die heute geforderten Soft-Skills, die für ein Arbeiten im Team unerlässlich sind. Des Weiteren lernen Sie das Interaktionsdesign Ihrer Anwendungen kreativ zu optimieren und ihr Auge für eine gelungene Gestaltung zu schulen. In jedem Semester werden Akzente gesetzt: Im ersten und dritten Semester haben Sie beispielsweise ein Projekt anstelle eher technisch ausgerichteter Module. Die Hälfte Ihrer Wahlpflichtmodule absolvieren Sie am Fachbereich Media. </s>"
example_title: "Question generation 1"
- text: "generate question: SARS-CoV-2 zirkuliert weiterhin in der Bevölkerung und kann sich überall dort verbreiten, wo Menschen zusammenkommen. Auch wenn in den Sommermonaten die Fallzahlen saisonbedingt niedriger sind als in der kalten Jahreszeit, empfiehlt das RKI nach wie vor, die AHA+A+L-Regeln einzuhalten (Abstand halten, Hygieneregeln beachten, Alltag mit Maske, Coronawarnapp nutzen, Lüften), bei Atemwegssymptomen zu Hause zu bleiben und sich testen zu lassen, und auf einen vollständigen Impfschutz gegen COVID-19 zu achten. </s>"
example_title: "Question generation 2"
- text: "generate question: Ballaststoffe haben eine Reihe von Wirkungen auf den Körper, vor allem auf die Verdauung, z. B. Einfluss auf die Transitzeit der Nahrung in Magen und Darm, Masse und Konsistenz des Stuhls sowie Häufigkeit der Darmentleerung, Sättigungswirkung, veränderte Nährstoffabsorption und präbiotische Wirkung. Je nach Art der Ballaststoffe und nach Abschnitt im Verdauungstrakt kann es zu unterschiedlichen Effekten kommen. Bei der Fermentation von Ballaststoffen entstehen zudem verschiedene kurzkettige Fettsäuren, die dem Körper teilweise als Energiequelle zur Verfügung stehen. Schätzungsweise liefern die kurzkettigen Fettsäuren 8,4 kJ (2,0 kcal) pro g Ballaststoff. </s>"
example_title: "Question generation 3"
inference:
parameters:
max_length: 128
num_beams: 4
length_penalty: 1.5
no_repeat_ngram_size: 3
early_stopping: True
language:
- de
tags:
- question generation
datasets:
- deepset/germanquad
metrics:
- sacrebleu
- bleu
- rouge-l
- meteor
- bertscore
model-index:
- name: tilomichel/mT5-base-GermanQuAD-e2e-qg
results:
- task:
type: question-generation
name: Question generation
dataset:
type: xquad
name: XQuAD (de)
split: de
metrics:
- type: sacrebleu
value: 1.72837804716791
name: BLEU Score
args:
lowercase: true
verified: false
- type: sacrebleu
value: 49.210584834334
name: BLEU-1
args:
lowercase: true
verified: false
- type: sacrebleu
value: 16.960300681230915
name: BLEU-2
args:
lowercase: true
verified: false
- type: sacrebleu
value: 7.144635299975106
name: BLEU-3
args:
lowercase: true
verified: false
- type: sacrebleu
value: 3.230076780513635
name: BLEU-4
args:
lowercase: true
verified: false
- type: rouge
name: ROUGE-L (f-measure)
value: 0.171130005590873
args:
use_aggregator: true
use_stemmer: false
verified: false
- type: meteor
value: 0.0835049103331918
name: METEOR
args:
language: de
verified: false
- type: bertscore
value: 0.331940584507538
name: BERTScore (F1)
args:
rescale_with_baseline: true
verified: false
---
# mT5-base finetuned on the GermanQuAD dataset for answer-agnostic question generation
This model is a finetuned [mT5-base](https://arxiv.org/abs/2010.11934) model for the task of answer-agnostic (or end-to-end) question generation. The approach from [Lopez et al.](https://arxiv.org/abs/2005.01107), called *All questions per line (AQPL)*, was used. This means a paragraph is provided as input and multiple questions are generated from it. Other models have already applied this approach with the T5 model for [English](https://huggingface.co/valhalla/t5-base-e2e-qg) and [German](https://huggingface.co/dehio/german-qg-t5-e2e-quad).
For finetuning this model, only the [GermanQuAD dataset from deepset](https://www.deepset.ai/germanquad) was used. The dataset was modified and filtered with scripts that can be found in [another repository](https://github.com/TiloMichel/textgen-for-chatbot-training-german/tree/main/1_data_preparation_and_exploration).
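A minimal inference sketch using the `generate question:` prefix and the generation parameters listed in the card metadata (the input paragraph below is only a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "tilomichel/mT5-base-GermanQuAD-e2e-qg"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

paragraph = "Berlin ist die Hauptstadt von Deutschland."  # placeholder paragraph
inputs = tokenizer(f"generate question: {paragraph} </s>", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=128,
    num_beams=4,
    length_penalty=1.5,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```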
## Training, test and evaluation data
For training and test the original split from GermanQuAD was used. As evaluation dataset the German split of the [XQuAD](https://github.com/deepmind/xquad) dataset was used.
## Training hyperparameters
The training parameters are provided in JSON and can be used with a training script provided in a [repository](https://github.com/TiloMichel/textgen-for-chatbot-training-german/tree/main/2_training)
```JS
{
"model_name_or_path": "google/mt5-base",
"output_dir": "mt5-base-germanquad-e2e-qg",
"overwrite_output_dir": true,
"cache_dir": "model-cache",
"dataset_dir": "e2e-qg-germanquad",
"preprocessing_num_workers": 20,
"max_source_length": 1024,
"max_target_length": 128,
"val_max_target_length": 128,
"pad_to_max_length": true,
"seed": 42,
"do_train": true,
"gradient_accumulation_steps": 64,
"per_device_train_batch_size": 1,
"per_device_eval_batch_size": 1,
"learning_rate": 1e-4,
"num_train_epochs": 10,
"evaluation_strategy": "epoch",
"logging_strategy": "epoch",
"save_strategy": "epoch",
"save_total_limit": 3,
"dataloader_num_workers": 8,
"ddp_find_unused_parameters": false
}
```
## Training results
The evaluation is reported on XQuAD. The implementations and configurations can be found in [another repository](https://github.com/TiloMichel/textgen-for-chatbot-training-german/tree/main/3_evaluation). |
natalierobbins/test_model | 6229c433e4c8e29be7d43a783685b0345cf76bca | 2022-07-19T23:29:56.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | natalierobbins | null | natalierobbins/test_model | 87 | null | transformers | 4,865 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: test_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0749
- Accuracy: 0.9720
- F1: 0.9698
- Precision: 0.9710
- Recall: 0.9720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0812 | 1.0 | 1315 | 0.0749 | 0.9720 | 0.9698 | 0.9710 | 0.9720 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
abelblue3/DialoGPT-medium-baymax | 7e5216ccdbc9de4e8e2810f1bb38fadbcb8376de | 2022-07-29T17:08:22.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | abelblue3 | null | abelblue3/DialoGPT-medium-baymax | 87 | null | transformers | 4,866 | ---
tags:
- conversational
---
# DialoGPT BaymaxBot |
AkshatSurolia/DeiT-FaceMask-Finetuned | 7a7f81c61c64c9d0821d0a5d63d54fe9427ebde6 | 2022-02-18T13:10:05.000Z | [
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0"
] | image-classification | false | AkshatSurolia | null | AkshatSurolia/DeiT-FaceMask-Finetuned | 86 | null | transformers | 4,867 | ---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on a self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. The DeiT architecture was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al.
## Model description
This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
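A minimal inference sketch (it assumes the checkpoint ships a preprocessor config; the image path is a placeholder and the printed label comes from the model's own label mapping):
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_name = "AkshatSurolia/DeiT-FaceMask-Finetuned"
extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

image = Image.open("face.jpg").convert("RGB")  # placeholder path
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```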
## Training Metrics
epoch = 2.0
total_flos = 2078245655GF
train_loss = 0.0438
train_runtime = 1:37:16.87
train_samples_per_second = 9.887
train_steps_per_second = 0.309
---
## Evaluation Metrics
epoch = 2.0
eval_accuracy = 0.9922
eval_loss = 0.0271
eval_runtime = 0:03:17.36
eval_samples_per_second = 18.22
eval_steps_per_second = 2.28 |
BigSalmon/SimplifyText | f1dcd3f3a42cf9f8515c8177e341c506e24dd9d2 | 2021-10-14T00:41:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/SimplifyText | 86 | null | transformers | 4,868 | - All credit goes to https://huggingface.co/philippelaban/keep_it_simple.
- This is a copy of their repository for future training purposes.
- It is supposed to simplify text.
- Their model card gives instructions on how to use it. |
Geotrend/bert-base-en-th-cased | 336fd71f6da084fd96bc5cec3b745712333ba686 | 2021-05-18T19:47:11.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-th-cased | 86 | null | transformers | 4,869 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-th-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-th-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-th-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Helsinki-NLP/opus-mt-de-vi | ec56a3fd05ff4fba8acaa11a0d434e889b088cdd | 2021-01-18T08:02:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-de-vi | 86 | null | transformers | 4,870 | ---
language:
- de
- vi
tags:
- translation
license: apache-2.0
---
### deu-vie
* source group: German
* target group: Vietnamese
* OPUS readme: [deu-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.eval.txt)
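A minimal translation sketch with the standard Marian classes (the German example sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Ich habe das Buch gestern gelesen."]  # illustrative input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```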
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.vie | 25.0 | 0.443 |
### System Info:
- hf_name: deu-vie
- source_languages: deu
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'vi']
- src_constituents: {'deu'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-vie/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: vie
- short_pair: de-vi
- chrF2_score: 0.44299999999999995
- bleu: 25.0
- brevity_penalty: 1.0
- ref_len: 3768.0
- src_name: German
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: vi
- prefer_old: False
- long_pair: deu-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-gmw-gmw | 14861fa7cfac10a6f99c09e30a18c23cc02bf1b1 | 2021-01-18T08:53:04.000Z | [
"pytorch",
"marian",
"text2text-generation",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gmw-gmw | 86 | null | transformers | 4,871 | ---
language:
- nl
- en
- lb
- af
- de
- fy
- yi
- gmw
tags:
- translation
license: apache-2.0
---
### gmw-gmw
* source group: West Germanic languages
* target group: West Germanic languages
* OPUS readme: [gmw-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* model: transformer
* source language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); the usage sketch after this list shows how to pass it
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.eval.txt)
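A minimal multilingual usage sketch showing the sentence-initial target language token (here `>>nld<<` for Dutch; the German input sentence is only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>nld<< token selects Dutch as the target language
src_text = [">>nld<< Das ist ein kleiner Test."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```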
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 25.3 | 0.527 |
| newssyscomb2009-engdeu.eng.deu | 19.0 | 0.502 |
| news-test2008-deueng.deu.eng | 23.7 | 0.515 |
| news-test2008-engdeu.eng.deu | 19.2 | 0.491 |
| newstest2009-deueng.deu.eng | 23.1 | 0.514 |
| newstest2009-engdeu.eng.deu | 18.6 | 0.495 |
| newstest2010-deueng.deu.eng | 25.8 | 0.545 |
| newstest2010-engdeu.eng.deu | 20.3 | 0.505 |
| newstest2011-deueng.deu.eng | 23.7 | 0.523 |
| newstest2011-engdeu.eng.deu | 18.9 | 0.490 |
| newstest2012-deueng.deu.eng | 24.4 | 0.529 |
| newstest2012-engdeu.eng.deu | 19.2 | 0.489 |
| newstest2013-deueng.deu.eng | 27.2 | 0.545 |
| newstest2013-engdeu.eng.deu | 22.4 | 0.514 |
| newstest2014-deen-deueng.deu.eng | 27.0 | 0.546 |
| newstest2015-ende-deueng.deu.eng | 28.4 | 0.552 |
| newstest2015-ende-engdeu.eng.deu | 25.3 | 0.541 |
| newstest2016-ende-deueng.deu.eng | 33.2 | 0.595 |
| newstest2016-ende-engdeu.eng.deu | 29.8 | 0.578 |
| newstest2017-ende-deueng.deu.eng | 29.0 | 0.557 |
| newstest2017-ende-engdeu.eng.deu | 23.9 | 0.534 |
| newstest2018-ende-deueng.deu.eng | 35.9 | 0.607 |
| newstest2018-ende-engdeu.eng.deu | 34.8 | 0.609 |
| newstest2019-deen-deueng.deu.eng | 32.1 | 0.579 |
| newstest2019-ende-engdeu.eng.deu | 31.0 | 0.579 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.065 |
| Tatoeba-test.afr-deu.afr.deu | 46.8 | 0.668 |
| Tatoeba-test.afr-eng.afr.eng | 58.5 | 0.728 |
| Tatoeba-test.afr-enm.afr.enm | 13.4 | 0.357 |
| Tatoeba-test.afr-fry.afr.fry | 5.3 | 0.026 |
| Tatoeba-test.afr-gos.afr.gos | 3.5 | 0.228 |
| Tatoeba-test.afr-ltz.afr.ltz | 1.6 | 0.131 |
| Tatoeba-test.afr-nld.afr.nld | 55.4 | 0.715 |
| Tatoeba-test.afr-yid.afr.yid | 3.4 | 0.008 |
| Tatoeba-test.ang-afr.ang.afr | 3.1 | 0.096 |
| Tatoeba-test.ang-deu.ang.deu | 2.6 | 0.188 |
| Tatoeba-test.ang-eng.ang.eng | 5.4 | 0.211 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.197 |
| Tatoeba-test.ang-gos.ang.gos | 6.6 | 0.186 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.3 | 0.072 |
| Tatoeba-test.ang-yid.ang.yid | 0.9 | 0.131 |
| Tatoeba-test.deu-afr.deu.afr | 52.7 | 0.699 |
| Tatoeba-test.deu-ang.deu.ang | 0.8 | 0.133 |
| Tatoeba-test.deu-eng.deu.eng | 43.5 | 0.621 |
| Tatoeba-test.deu-enm.deu.enm | 6.9 | 0.245 |
| Tatoeba-test.deu-frr.deu.frr | 0.8 | 0.200 |
| Tatoeba-test.deu-fry.deu.fry | 15.1 | 0.367 |
| Tatoeba-test.deu-gos.deu.gos | 2.2 | 0.279 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.0 | 0.176 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.208 |
| Tatoeba-test.deu-ltz.deu.ltz | 12.1 | 0.274 |
| Tatoeba-test.deu-nds.deu.nds | 18.8 | 0.446 |
| Tatoeba-test.deu-nld.deu.nld | 48.6 | 0.669 |
| Tatoeba-test.deu-pdc.deu.pdc | 4.6 | 0.198 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.340 |
| Tatoeba-test.deu-stq.deu.stq | 3.2 | 0.240 |
| Tatoeba-test.deu-swg.deu.swg | 0.5 | 0.179 |
| Tatoeba-test.deu-yid.deu.yid | 1.7 | 0.160 |
| Tatoeba-test.eng-afr.eng.afr | 55.8 | 0.730 |
| Tatoeba-test.eng-ang.eng.ang | 5.7 | 0.157 |
| Tatoeba-test.eng-deu.eng.deu | 36.7 | 0.584 |
| Tatoeba-test.eng-enm.eng.enm | 2.0 | 0.272 |
| Tatoeba-test.eng-frr.eng.frr | 6.1 | 0.246 |
| Tatoeba-test.eng-fry.eng.fry | 15.3 | 0.378 |
| Tatoeba-test.eng-gos.eng.gos | 1.2 | 0.242 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.164 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.9 | 0.170 |
| Tatoeba-test.eng-ltz.eng.ltz | 13.7 | 0.263 |
| Tatoeba-test.eng-nds.eng.nds | 17.1 | 0.410 |
| Tatoeba-test.eng-nld.eng.nld | 49.6 | 0.673 |
| Tatoeba-test.eng-pdc.eng.pdc | 5.1 | 0.218 |
| Tatoeba-test.eng-sco.eng.sco | 34.8 | 0.587 |
| Tatoeba-test.eng-stq.eng.stq | 2.1 | 0.322 |
| Tatoeba-test.eng-swg.eng.swg | 1.7 | 0.192 |
| Tatoeba-test.eng-yid.eng.yid | 1.7 | 0.173 |
| Tatoeba-test.enm-afr.enm.afr | 13.4 | 0.397 |
| Tatoeba-test.enm-ang.enm.ang | 0.7 | 0.063 |
| Tatoeba-test.enm-deu.enm.deu | 41.5 | 0.514 |
| Tatoeba-test.enm-eng.enm.eng | 21.3 | 0.483 |
| Tatoeba-test.enm-fry.enm.fry | 0.0 | 0.058 |
| Tatoeba-test.enm-gos.enm.gos | 10.7 | 0.354 |
| Tatoeba-test.enm-ksh.enm.ksh | 7.0 | 0.161 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.316 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.524 |
| Tatoeba-test.enm-yid.enm.yid | 0.7 | 0.128 |
| Tatoeba-test.frr-deu.frr.deu | 4.1 | 0.219 |
| Tatoeba-test.frr-eng.frr.eng | 14.1 | 0.186 |
| Tatoeba-test.frr-fry.frr.fry | 3.1 | 0.129 |
| Tatoeba-test.frr-gos.frr.gos | 3.6 | 0.226 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.145 |
| Tatoeba-test.frr-nld.frr.nld | 9.8 | 0.209 |
| Tatoeba-test.frr-stq.frr.stq | 2.8 | 0.142 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 1.000 |
| Tatoeba-test.fry-deu.fry.deu | 30.1 | 0.535 |
| Tatoeba-test.fry-eng.fry.eng | 28.0 | 0.486 |
| Tatoeba-test.fry-enm.fry.enm | 16.0 | 0.262 |
| Tatoeba-test.fry-frr.fry.frr | 5.5 | 0.160 |
| Tatoeba-test.fry-gos.fry.gos | 1.6 | 0.307 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.438 |
| Tatoeba-test.fry-nds.fry.nds | 8.1 | 0.083 |
| Tatoeba-test.fry-nld.fry.nld | 41.4 | 0.616 |
| Tatoeba-test.fry-stq.fry.stq | 1.6 | 0.217 |
| Tatoeba-test.fry-yid.fry.yid | 1.6 | 0.159 |
| Tatoeba-test.gos-afr.gos.afr | 6.3 | 0.318 |
| Tatoeba-test.gos-ang.gos.ang | 6.2 | 0.058 |
| Tatoeba-test.gos-deu.gos.deu | 11.7 | 0.363 |
| Tatoeba-test.gos-eng.gos.eng | 14.9 | 0.322 |
| Tatoeba-test.gos-enm.gos.enm | 9.1 | 0.398 |
| Tatoeba-test.gos-frr.gos.frr | 3.3 | 0.117 |
| Tatoeba-test.gos-fry.gos.fry | 13.1 | 0.387 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.1 | 0.154 |
| Tatoeba-test.gos-nds.gos.nds | 2.4 | 0.206 |
| Tatoeba-test.gos-nld.gos.nld | 13.9 | 0.395 |
| Tatoeba-test.gos-stq.gos.stq | 2.1 | 0.209 |
| Tatoeba-test.gos-yid.gos.yid | 1.7 | 0.147 |
| Tatoeba-test.gsw-deu.gsw.deu | 10.5 | 0.350 |
| Tatoeba-test.gsw-eng.gsw.eng | 10.7 | 0.299 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.0 | 0.373 |
| Tatoeba-test.ksh-eng.ksh.eng | 3.2 | 0.225 |
| Tatoeba-test.ksh-enm.ksh.enm | 13.4 | 0.308 |
| Tatoeba-test.ltz-afr.ltz.afr | 37.4 | 0.525 |
| Tatoeba-test.ltz-ang.ltz.ang | 2.8 | 0.036 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.3 | 0.596 |
| Tatoeba-test.ltz-eng.ltz.eng | 31.7 | 0.490 |
| Tatoeba-test.ltz-fry.ltz.fry | 36.3 | 0.658 |
| Tatoeba-test.ltz-gos.ltz.gos | 2.9 | 0.209 |
| Tatoeba-test.ltz-nld.ltz.nld | 38.8 | 0.530 |
| Tatoeba-test.ltz-stq.ltz.stq | 5.8 | 0.165 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.0 | 0.159 |
| Tatoeba-test.multi.multi | 36.4 | 0.568 |
| Tatoeba-test.nds-deu.nds.deu | 35.0 | 0.573 |
| Tatoeba-test.nds-eng.nds.eng | 29.6 | 0.495 |
| Tatoeba-test.nds-enm.nds.enm | 3.7 | 0.194 |
| Tatoeba-test.nds-frr.nds.frr | 6.6 | 0.133 |
| Tatoeba-test.nds-fry.nds.fry | 4.2 | 0.087 |
| Tatoeba-test.nds-gos.nds.gos | 2.0 | 0.243 |
| Tatoeba-test.nds-nld.nds.nld | 41.4 | 0.618 |
| Tatoeba-test.nds-swg.nds.swg | 0.6 | 0.178 |
| Tatoeba-test.nds-yid.nds.yid | 8.3 | 0.238 |
| Tatoeba-test.nld-afr.nld.afr | 59.4 | 0.759 |
| Tatoeba-test.nld-deu.nld.deu | 49.9 | 0.685 |
| Tatoeba-test.nld-eng.nld.eng | 54.1 | 0.699 |
| Tatoeba-test.nld-enm.nld.enm | 5.0 | 0.250 |
| Tatoeba-test.nld-frr.nld.frr | 2.4 | 0.224 |
| Tatoeba-test.nld-fry.nld.fry | 19.4 | 0.446 |
| Tatoeba-test.nld-gos.nld.gos | 2.5 | 0.273 |
| Tatoeba-test.nld-ltz.nld.ltz | 13.8 | 0.292 |
| Tatoeba-test.nld-nds.nld.nds | 21.3 | 0.457 |
| Tatoeba-test.nld-sco.nld.sco | 14.7 | 0.423 |
| Tatoeba-test.nld-stq.nld.stq | 1.9 | 0.257 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.162 |
| Tatoeba-test.nld-yid.nld.yid | 2.6 | 0.186 |
| Tatoeba-test.pdc-deu.pdc.deu | 39.7 | 0.529 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.0 | 0.427 |
| Tatoeba-test.sco-deu.sco.deu | 28.4 | 0.428 |
| Tatoeba-test.sco-eng.sco.eng | 41.8 | 0.595 |
| Tatoeba-test.sco-nld.sco.nld | 36.4 | 0.565 |
| Tatoeba-test.stq-deu.stq.deu | 7.7 | 0.328 |
| Tatoeba-test.stq-eng.stq.eng | 21.1 | 0.428 |
| Tatoeba-test.stq-frr.stq.frr | 2.0 | 0.118 |
| Tatoeba-test.stq-fry.stq.fry | 6.3 | 0.255 |
| Tatoeba-test.stq-gos.stq.gos | 1.4 | 0.244 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.4 | 0.204 |
| Tatoeba-test.stq-nld.stq.nld | 10.7 | 0.371 |
| Tatoeba-test.stq-yid.stq.yid | 1.4 | 0.105 |
| Tatoeba-test.swg-deu.swg.deu | 9.5 | 0.343 |
| Tatoeba-test.swg-eng.swg.eng | 15.1 | 0.306 |
| Tatoeba-test.swg-nds.swg.nds | 0.7 | 0.196 |
| Tatoeba-test.swg-nld.swg.nld | 11.6 | 0.308 |
| Tatoeba-test.swg-yid.swg.yid | 0.9 | 0.186 |
| Tatoeba-test.yid-afr.yid.afr | 100.0 | 1.000 |
| Tatoeba-test.yid-ang.yid.ang | 0.6 | 0.079 |
| Tatoeba-test.yid-deu.yid.deu | 16.7 | 0.372 |
| Tatoeba-test.yid-eng.yid.eng | 15.8 | 0.344 |
| Tatoeba-test.yid-enm.yid.enm | 1.3 | 0.166 |
| Tatoeba-test.yid-fry.yid.fry | 5.6 | 0.157 |
| Tatoeba-test.yid-gos.yid.gos | 2.2 | 0.160 |
| Tatoeba-test.yid-ltz.yid.ltz | 2.1 | 0.238 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.365 |
| Tatoeba-test.yid-nld.yid.nld | 20.9 | 0.397 |
| Tatoeba-test.yid-stq.yid.stq | 3.7 | 0.165 |
| Tatoeba-test.yid-swg.yid.swg | 1.8 | 0.156 |
### System Info:
- hf_name: gmw-gmw
- source_languages: gmw
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt
- src_alpha3: gmw
- tgt_alpha3: gmw
- short_pair: gmw-gmw
- chrF2_score: 0.568
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 72534.0
- src_name: West Germanic languages
- tgt_name: West Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmw
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: gmw-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ja-tr | ac9ece2f2c4dd604a6d69c927bef045f09295e5d | 2020-08-21T14:42:47.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ja",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ja-tr | 86 | null | transformers | 4,872 | ---
language:
- ja
- tr
tags:
- translation
license: apache-2.0
---
### jpn-tur
* source group: Japanese
* target group: Turkish
* OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md)
* model: transformer-align
* source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.jpn.tur | 16.7 | 0.434 |
### System Info:
- hf_name: jpn-tur
- source_languages: jpn
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ja', 'tr']
- src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt
- src_alpha3: jpn
- tgt_alpha3: tur
- short_pair: ja-tr
- chrF2_score: 0.434
- bleu: 16.7
- brevity_penalty: 0.932
- ref_len: 4755.0
- src_name: Japanese
- tgt_name: Turkish
- train_date: 2020-06-17
- src_alpha2: ja
- tgt_alpha2: tr
- prefer_old: False
- long_pair: jpn-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
NYTK/summarization-hi-bart-base-1024-hungarian | 89a30ed7fa730cd8068cf4931de9fd4b41b21334 | 2022-02-14T13:27:40.000Z | [
"pytorch",
"bart",
"text2text-generation",
"hu",
"transformers",
"summarization",
"license:gpl",
"autotrain_compatible"
] | summarization | false | NYTK | null | NYTK/summarization-hi-bart-base-1024-hungarian | 86 | null | transformers | 4,873 | ---
language:
- hu
tags:
- summarization
license: gpl
metrics:
- rouge
widget:
- text: "A Tisza-parti város állatkertjében régóta tartanak szurikátákat ( Suricata suricatta ) , de tavaly tavaszig nem sikerült szaporítani őket , annak ellenére , hogy tágas ház és kifutó épült számukra - közölte Veprik Róbert igazgató . 2010-ben alakult ki az új - három Amszterdamból származó nőstényből és egy budapesti fiatal hímből álló - csapat , amely szaporodni kezdett . 2011-ben három , idén pedig egy utóddal örvendeztették meg a gondozókat és az állatbarátokat . A szurikáták utódai - tizenegy hetes vemhesség után - október és március között vakon és szőrtelenül jönnek a világra . A kicsinyek háromhetesen bújnak elő az üregből , és nevelésükben mindkét szülő részt vesz . A szurikátacsapatokban a család tagjai nagyon szoros kapcsolatban állnak egymással , viszont nagyon harciasan fellépnek az idegenekkel szemben , akár meg is ölhetik azt az állatot , amelyet betolakodónak tekintenek . Bár a Dél-Afrikában , a Kalahári sivatagban őshonos cibetmacskaféle ragadozókat a szegedi állatkertben természetes élőhelyükhöz képest kevesebb veszély fenyegeti , a vadasparki erdőben ragadozó madarak is élnek , amelyek akár zsákmányként is tekinthetnének a szurikátákra . A szegedi csapatnál azonban szigorú őrség van , mindig lesi valaki két lábra állva a veszélyforrásokat . Az őrszemek figyelmét még a sárkányrepülők is felkeltik , és felbukkanásakor valamennyi egyed biztos helyre menekül . A szurikáták a Kalahári sivatag bozótos , sziklás területein csapatokban élnek . A 700 gramm körüli testtömegű ragadozók rovarokkal , lárvákkal , skorpiókkal táplálkoznak , de néha elfogyasztják a kisebb gerinceseket , tojásokat és növényi gumókat is . A nappal aktív állatok földalatti üregrendszert ásnak , amelynek több bejárata is van . Ha a szurikáták idegen csapattal vagy ragadozóval kerülnek szembe , azonnal elkezdenek ásni , nagy porfelhőt kavarva . Az is gyakorta előfordul , hogy szorosan egymáshoz bújnak , felborzolják szőrüket , megnyújtják testüket , hogy minél nagyobbnak látszódjanak . Az előadásuk csúcspontján pedig az egész csapat a levegőbe ugrik , közben pedig morog . A hangadás egyébként is fontos a szurikáták kapcsolatában , az egyedek legalább tízféle jelzést használnak a kolónián belül ."
---
# Hungarian Abstractive Summarization BART model
For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- BART base model (see Results Table - bold):
- Pretrained on Webcorpus 2.0
- Finetuned on the HI corpus (hvg.hu + index.hu)
- Segments: 559.162
## Limitations
- tokenized input text (tokenizer: [HuSpaCy](https://huggingface.co/huspacy))
- **max_source_length = 1024**
- max_target_length = 256
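Below is a minimal generation sketch, assuming the standard `transformers` seq2seq API; the beam-search settings are illustrative, and per the limitations above the input should be Hungarian text tokenized with HuSpaCy and at most 1024 source tokens:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "NYTK/summarization-hi-bart-base-1024-hungarian"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # Hungarian article text, pre-tokenized with HuSpaCy

inputs = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=256, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```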
## Results
| Model | HI | NOL |
| ------------- | ------------- | ------------- |
| BART-base-512 | 30.18/13.86/22.92 | 46.48/32.40/39.45 |
| BART-base-1024| **31.86/14.59/23.79** | 47.01/32.91/39.97 |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {{BARTerezzünk! - Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}},
booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia},
year = {2022},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {{Yang Zijian Győző}},
pages = {15--29}
}
``` |
benjamin/gpt2-wechsel-chinese | feee72e42c2b685b2db8905223633ae6ce92f20f | 2022-07-13T23:43:53.000Z | [
"pytorch",
"gpt2",
"text-generation",
"zh",
"transformers",
"license:mit"
] | text-generation | false | benjamin | null | benjamin/gpt2-wechsel-chinese | 86 | null | transformers | 4,874 | ---
language: zh
license: mit
---
# gpt2-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
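A minimal generation sketch with the `transformers` pipeline, assuming standard GPT-2 usage; the prompt and sampling settings are purely illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-chinese")

# Sample a short Chinese continuation (settings chosen for illustration only).
print(generator("今天天气很好,", max_new_tokens=30, do_sample=True, top_p=0.95)[0]["generated_text"])
```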
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
nates-test-org/convit_small | 0d0cc307c82f89aa6268ef668e056b1ba00c2fc1 | 2021-10-29T04:45:04.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/convit_small | 86 | null | timm | 4,875 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for convit_small |
persiannlp/mt5-base-parsinlu-squad-reading-comprehension | 626121fa4a6a18daa743231abe29b3419c03cd61 | 2021-09-23T16:20:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:squad",
"transformers",
"reading-comprehension",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-base-parsinlu-squad-reading-comprehension | 86 | 1 | transformers | 4,876 |
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- reading-comprehension
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- squad
metrics:
- f1
---
# Reading Comprehension (مدل برای پاسخ به درک مطلب)
This is an mT5-based model for reading comprehension.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "base"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(paragraph, question, **generator_args):
input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model(
"یک شی را دارای تقارن مینامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آنها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن مینامیم مرکز تقارن:اگر در یک شکل نقطهای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکلهای که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکلهای فرد ضلعی منتظم مرکز تقارن ندارند. متوازیالأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)",
"اشکالی که یک مرکز تقارن دارند"
)
run_model(
"شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] میگفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) میکنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده میشود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچهبافی و کفشدوزی کاربرد دارد. گونههای دیگری از شتران نیز در آمریکای جنوبی زندگی میکنند، به نامهای لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگیهای خاصّی دارد که مهمترین آنها تحمّل شرایط سخت صحرا و دماهای گوناگون و بهویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوتهها و درختچههای گوناگون صحرایی و کویری و حتی از بوتههای شور و خاردار تغذیه کند. عربها از زمانهای بسیار دور از شتر استفاده کرده و میکنند. آنها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) دادهاند.",
"غذای شترچیست؟"
)
run_model(
"""حسین میرزایی میگوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانهبگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیبپذیر" شناسایی شدند، میتوانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیبپذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شدهاند. بنا به گزارشهای رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شدهاند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفتههای اخیر در برابر ارزهای خارجی سقوط کرده است. اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""",
"وام یارانه به چه کسانی میدهند؟"
)
run_model(
"در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دستآوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شدهاست از این رو بسیاری از ارتشهای شکست خورده با آنها همراهی کردند.",
"چرا امریکا وارد جنگ جهانی دوم شد؟"
)
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/ |
projecte-aina/roberta-base-ca-cased-ner | aa44b621c767260cafa37e060f4f8a251f619936 | 2022-06-15T07:55:56.000Z | [
"pytorch",
"roberta",
"token-classification",
"ca",
"dataset:projecte-aina/ancora-ca-ner",
"arxiv:1907.11692",
"transformers",
"catalan",
"named entity recognition",
"ner",
"CaText",
"Catalan Textual Corpus",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | projecte-aina | null | projecte-aina/roberta-base-ca-cased-ner | 86 | 1 | transformers | 4,877 | ---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "named entity recognition"
- "ner"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/ancora-ca-ner"
metrics:
- f1
model-index:
- name: roberta-base-ca-cased-ner
results:
- task:
type: token-classification
dataset:
type: projecte-aina/ancora-ca-ner
name: ancora-ca-ner
metrics:
- type: f1
value: 0.8813
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa (RoBERTa-base) finetuned for Named Entity Recognition.
The **roberta-base-ca-cased-ner** is a Named Entity Recognition (NER) model for the Catalan language fine-tuned from the [BERTa](https://huggingface.co/PlanTL-GOB-ES/roberta-base-ca) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
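A minimal inference sketch with the `transformers` token-classification pipeline; the sentence comes from the widget examples above, and `aggregation_strategy="simple"` is an illustrative choice:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="projecte-aina/roberta-base-ca-cased-ner",
    aggregation_strategy="simple",
)
print(ner("Em dic Lluïsa i visc a Santa Maria del Camí."))
```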
## Datasets
We used the NER dataset in Catalan called [Ancora-ca-ner](https://huggingface.co/datasets/projecte-aina/ancora-ca-ner) for training and evaluation.
## Evaluation and results
We evaluated the _roberta-base-ca-cased-ner_ on the Ancora-ca-ner test set against standard multilingual and monolingual baselines:
| Model | Ancora-ca-ner (F1)|
| ------------|:-------------|
| roberta-base-ca-cased-ner | **88.13** |
| mBERT | 86.38 |
| XLM-RoBERTa | 87.66 |
| WikiBERT-ca | 77.66 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Citing
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
|
remi/bertabs-finetuned-xsum-extractive-abstractive-summarization | a184879f346b3a77c35e9390c7e4a660cb2ef6e3 | 2021-05-20T04:17:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | remi | null | remi/bertabs-finetuned-xsum-extractive-abstractive-summarization | 86 | null | transformers | 4,878 | Entry not found |
nickmuchi/vit-finetuned-chest-xray-pneumonia | 2086fb41a4a93b6c7bf701ae8d068157591c8afb | 2022-03-09T12:50:04.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:chest X-rays",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | nickmuchi | null | nickmuchi/vit-finetuned-chest-xray-pneumonia | 86 | null | transformers | 4,879 | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
datasets:
- chest X-rays
widget:
- src: https://drive.google.com/uc?id=1ygVCyEn6mfsNwpT1ZvWxANg5_DvStA7M
example_title: PNEUMONIA
- src: https://drive.google.com/uc?id=1xjcIEDb8kuSd4wF44gCEgsc0PfRvs53m
example_title: NORMAL
model-index:
- name: vit-finetuned-chest-xray-pneumonia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-finetuned-chest-xray-pneumonia
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [chest-xray-pneumonia](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Accuracy: 0.9551
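As a rough sketch, the checkpoint can be queried with the image-classification pipeline; the URL below is one of the widget examples above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nickmuchi/vit-finetuned-chest-xray-pneumonia")

# One of the widget example chest X-rays (PNEUMONIA case).
print(classifier("https://drive.google.com/uc?id=1ygVCyEn6mfsNwpT1ZvWxANg5_DvStA7M"))
```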
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 326 | 0.2739 | 0.9167 |
| 0.2238 | 2.0 | 652 | 0.2892 | 0.9071 |
| 0.2238 | 3.0 | 978 | 0.2077 | 0.9407 |
| 0.1385 | 4.0 | 1304 | 0.1349 | 0.9535 |
| 0.1347 | 5.0 | 1630 | 0.1271 | 0.9551 |
| 0.1347 | 6.0 | 1956 | 0.1458 | 0.9535 |
| 0.1112 | 7.0 | 2282 | 0.2040 | 0.9375 |
| 0.1063 | 8.0 | 2608 | 0.1423 | 0.9567 |
| 0.1063 | 9.0 | 2934 | 0.1473 | 0.9535 |
| 0.0944 | 10.0 | 3260 | 0.1385 | 0.9583 |
## Example Images
#### Pneumonia Chest X-Ray

#### Normal Chest X-Ray

### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Alvenir/wav2vec2-base-da-ft-nst | f8e0c5370b6db09ff54eb8a15ed642f58eaae55f | 2022-03-17T16:16:12.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"transformers",
"speech-to-text",
"license:apache-2.0"
] | automatic-speech-recognition | false | Alvenir | null | Alvenir/wav2vec2-base-da-ft-nst | 86 | 3 | transformers | 4,880 | ---
language: da
tags:
- speech-to-text
license: apache-2.0
---
# wav2vec2-base-da-ft-nst
This is the [Alvenir wav2vec2 model](https://huggingface.co/Alvenir/wav2vec2-base-da) for Danish ASR, fine-tuned by Alvenir on the public NST dataset. The model is trained on 16 kHz audio, so make sure your data uses the same sample rate.
The model was trained using fairseq and then converted to huggingface/transformers format.
Alvenir is always happy to help with your own open-source ASR projects, customized domain specializations or premium models. ;-)
## Usage
```Python
import soundfile as sf
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2Tokenizer, Wav2Vec2Processor, \
Wav2Vec2ForCTC
def get_tokenizer(model_path: str) -> Wav2Vec2CTCTokenizer:
return Wav2Vec2Tokenizer.from_pretrained(model_path)
def get_processor(model_path: str) -> Wav2Vec2Processor:
return Wav2Vec2Processor.from_pretrained(model_path)
def load_model(model_path: str) -> Wav2Vec2ForCTC:
return Wav2Vec2ForCTC.from_pretrained(model_path)
model_id = "Alvenir/wav2vec2-base-da-ft-nst"
model = load_model(model_id)
model.eval()
tokenizer = get_tokenizer(model_id)
processor = get_processor(model_id)
audio_file = "<path/to/audio.wav>"
audio, _ = sf.read(audio_file)
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=16_000).input_values
with torch.no_grad():
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
## Benchmark results
This is some benchmark results on the public available datasets in Danish.
| Dataset | WER Greedy | WER with 3-gram Language Model |
|---------------------|------------|--------------------|
| NST test | 15,8% | 11.9% |
| alvenir-asr-da-eval | 19.0% | 12.1% |
| common_voice_80 da test | 26,3% | 19,2% |
|
ai4bharat/MultiIndicParaphraseGeneration | 3f5c5a06fa624a6267d93df3e8332197cc5cb6f5 | 2022-03-31T06:21:30.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicParaphrase",
"arxiv:2203.05437",
"transformers",
"paraphrase-generation",
"multilingual",
"nlp",
"indicnlp",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | ai4bharat | null | ai4bharat/MultiIndicParaphraseGeneration | 86 | null | transformers | 4,881 | ---
tags:
- paraphrase-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicParaphrase
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
---
# MultiIndicParaphraseGeneration
This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint finetuned on the 11 languages of [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (5.53 million sentences). </li>
<li> All languages, have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("दिल्ली यूनिवर्सिटी देश की प्रसिद्ध यूनिवर्सिटी में से एक है. </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # दिल्ली विश्वविद्यालय देश की प्रमुख विश्वविद्यालयों में शामिल है।
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
```
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
## Benchmarks
Scores on the `IndicParaphrase` test sets are as follows:
Language | BLEU / Self-BLEU / iBLEU
---------|----------------------------
as | 1.66 / 2.06 / 0.54
bn | 11.57 / 1.69 / 7.59
gu | 22.10 / 2.76 / 14.64
hi | 27.29 / 2.87 / 18.24
kn | 15.40 / 2.98 / 9.89
ml | 10.57 / 1.70 / 6.89
mr | 20.38 / 2.20 / 13.61
or | 19.26 / 2.10 / 12.85
pa | 14.87 / 1.35 / 10.00
ta | 18.52 / 2.88 / 12.10
te | 16.70 / 3.34 / 10.69
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
efederici/sentence-BERTino | 871f99951d3375f75a9cf7dc147cb1e0fac0170f | 2022-05-03T13:14:23.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"it",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | efederici | null | efederici/sentence-BERTino | 86 | 1 | sentence-transformers | 4,882 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-BERTino
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)) and tags/news-article pairs (via scraping).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-BERTino')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-BERTino')
model = AutoModel.from_pretrained('efederici/sentence-BERTino')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
``` |
NYTK/named-entity-recognition-nerkor-hubert-hungarian | a52a08bcc8abe9447ccc860f648b34500c130338 | 2022-04-01T09:03:27.000Z | [
"pytorch",
"bert",
"token-classification",
"hu",
"transformers",
"license:gpl",
"autotrain_compatible"
] | token-classification | false | NYTK | null | NYTK/named-entity-recognition-nerkor-hubert-hungarian | 86 | null | transformers | 4,883 | ---
language:
- hu
tags:
- token-classification
license: gpl
metrics:
- f1
widget:
- text: "A Kovácsné Nagy Erzsébet nagyon jól érzi magát a Nokiánál, azonban a Németországból érkezett Kovács Péter nehezen boldogul a beilleszkedéssel."
---
# Hungarian Named Entity Recognition model with huBERT
For further models, scripts and details, see [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: SZTAKI-HLT/hubert-base-cc
- Finetuned on [NYTK-NerKor](https://github.com/nytud/NYTK-NerKor)
- NE categories are: PER, LOC, MISC, ORG
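A minimal inference sketch, assuming the standard `transformers` token-classification pipeline; the sentence is the widget example above:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NYTK/named-entity-recognition-nerkor-hubert-hungarian",
    aggregation_strategy="simple",
)
print(ner("A Kovácsné Nagy Erzsébet nagyon jól érzi magát a Nokiánál, azonban a Németországból érkezett Kovács Péter nehezen boldogul a beilleszkedéssel."))
```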
## Limitations
- max_seq_length = 128
## Results
F-score: **90.18%**
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-language-models,
title = {Training language models with low resources: RoBERTa, BART and ELECTRA experimental models for Hungarian},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Yang, Zijian Győző and Váradi, Tamás}},
pages = {279--285}
}
``` |
IDEA-CCNL/Erlangshen-Roberta-330M-NLI | f42d4a45e9dc3933398be40431315edbd4e19c21 | 2022-05-12T09:49:11.000Z | [
"pytorch",
"bert",
"text-classification",
"zh",
"transformers",
"NLU",
"NLI",
"license:apache-2.0"
] | text-classification | false | IDEA-CCNL | null | IDEA-CCNL/Erlangshen-Roberta-330M-NLI | 86 | null | transformers | 4,884 | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---
# Erlangshen-Roberta-330M-NLI, a Chinese NLI model, one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
We collected 4 NLI (Natural Language Inference) datasets in the Chinese domain for fine-tuning, with a total of 1,014,787 samples. Our model is mainly based on [roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large).
## Usage
```python
from transformers import BertForSequenceClassification
from transformers import BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')
model=BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-Roberta-330M-NLI')
texta='今天的饭不好吃'
textb='今天心情不好'
output=model(torch.tensor([tokenizer.encode(texta,textb)]))
print(torch.nn.functional.softmax(output.logits,dim=-1))
```
## Scores on downstream chinese tasks (without any data augmentation)
| Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
mary905el/ruT5_neuro_chgk_answering | 349381b078e48288d6b82e26d5cd5c68c5c369bb | 2022-04-27T05:33:03.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"transformers",
"PyTorch",
"Transformers",
"autotrain_compatible"
] | text2text-generation | false | mary905el | null | mary905el/ruT5_neuro_chgk_answering | 86 | null | transformers | 4,885 | ---
language:
- ru
tags:
- PyTorch
- Transformers
widget:
- text: "Ответьте двумя словами, что мы заменили на ИКС?"
inference:
parameters:
do_sample: True
temperature: 0.8
---
This is the https://huggingface.co/sberbank-ai/ruT5-base model, fine-tuned to answer ChGK quiz questions (Что? Где? Когда?, https://db.chgk.info/).
Dataset: 75 000 questions from 2000-2019
Trained for 10 epochs |
smc/Electric_Pole_with_or_without_transformer | dc05ec14a729ac37195e2d0361cf279727d5f862 | 2022-05-21T22:09:33.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
] | image-classification | false | smc | null | smc/Electric_Pole_with_or_without_transformer | 86 | 1 | transformers | 4,886 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Electric_Pole_with_or_without_transformer
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9230769276618958
---
# Electric_Pole_with_or_without_transformer
Classify your electric pole images |
qalover/chinese-pert-large-open-domain-mrc | e9cfdc887fe9d6cb043fb9b870e1908067e31916 | 2022-05-31T14:21:54.000Z | [
"pytorch",
"bert",
"question-answering",
"zh",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | question-answering | false | qalover | null | qalover/chinese-pert-large-open-domain-mrc | 86 | 1 | transformers | 4,887 | ---
language:
- zh
license: gpl-3.0
---
## Open-domain MRC model based on chinese-pert-large
A chinese-pert-large model trained on Chinese MRC data (the training sets of cmrc2018, webqa and laisi).
## Training procedure
Fine-tuned with [UER-py](https://github.com/dbiir/UER-py/).
Data augmentation methods were applied, including but not limited to summarization, negative sampling and confusion-based augmentation.
The model was then converted to the Hugging Face format and uploaded.
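A minimal inference sketch, assuming the standard extractive question-answering pipeline in `transformers`; the question/context pair is purely illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="qalover/chinese-pert-large-open-domain-mrc")

# Illustrative question/context pair.
result = qa(question="北京是哪个国家的首都?", context="北京是中华人民共和国的首都,也是全国的政治和文化中心。")
print(result["answer"])
```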
| | CMRC 2018 Dev | DRCD Dev | SQuAD-Zen Dev (Answerable) | AVG |
| :-------: | :-----------: | :-------: | :------------------------: | :-------: |
| PERT-large | 74.4/89.8 | 90.3/94.| 62.8/78.8 | 75.9/87.8 |
|
renjithks/layoutlmv2-er-ner | 2edce8c45d8a8307630be7e53eff3d362a74bb6e | 2022-06-08T19:37:51.000Z | [
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | renjithks | null | renjithks/layoutlmv2-er-ner | 86 | null | transformers | 4,888 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv2-er-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-er-ner
This model is a fine-tuned version of [renjithks/layoutlmv2-cord-ner](https://huggingface.co/renjithks/layoutlmv2-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1217
- Precision: 0.7810
- Recall: 0.8085
- F1: 0.7945
- Accuracy: 0.9747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 41 | 0.5441 | 0.0 | 0.0 | 0.0 | 0.8851 |
| No log | 2.0 | 82 | 0.4660 | 0.1019 | 0.0732 | 0.0852 | 0.8690 |
| No log | 3.0 | 123 | 0.2506 | 0.4404 | 0.4828 | 0.4606 | 0.9240 |
| No log | 4.0 | 164 | 0.1725 | 0.6120 | 0.6076 | 0.6098 | 0.9529 |
| No log | 5.0 | 205 | 0.1387 | 0.7204 | 0.7245 | 0.7225 | 0.9671 |
| No log | 6.0 | 246 | 0.1237 | 0.7742 | 0.7747 | 0.7745 | 0.9722 |
| No log | 7.0 | 287 | 0.1231 | 0.7619 | 0.7554 | 0.7586 | 0.9697 |
| No log | 8.0 | 328 | 0.1199 | 0.7994 | 0.7719 | 0.7854 | 0.9738 |
| No log | 9.0 | 369 | 0.1197 | 0.7937 | 0.8113 | 0.8024 | 0.9741 |
| No log | 10.0 | 410 | 0.1284 | 0.7581 | 0.7597 | 0.7589 | 0.9690 |
| No log | 11.0 | 451 | 0.1172 | 0.7792 | 0.7848 | 0.7820 | 0.9738 |
| No log | 12.0 | 492 | 0.1192 | 0.7913 | 0.7970 | 0.7941 | 0.9743 |
| 0.1858 | 13.0 | 533 | 0.1175 | 0.7960 | 0.8006 | 0.7983 | 0.9753 |
| 0.1858 | 14.0 | 574 | 0.1184 | 0.7724 | 0.8034 | 0.7876 | 0.9740 |
| 0.1858 | 15.0 | 615 | 0.1171 | 0.7882 | 0.8142 | 0.8010 | 0.9756 |
| 0.1858 | 16.0 | 656 | 0.1195 | 0.7829 | 0.8070 | 0.7948 | 0.9745 |
| 0.1858 | 17.0 | 697 | 0.1209 | 0.7810 | 0.8006 | 0.7906 | 0.9743 |
| 0.1858 | 18.0 | 738 | 0.1241 | 0.7806 | 0.7963 | 0.7884 | 0.9740 |
| 0.1858 | 19.0 | 779 | 0.1222 | 0.7755 | 0.8027 | 0.7889 | 0.9742 |
| 0.1858 | 20.0 | 820 | 0.1217 | 0.7810 | 0.8085 | 0.7945 | 0.9747 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
fusing/latent-diffusion-text2im-large | 030d95399e64c1ff7be798e62b9bc6ef6ec7bf3b | 2022-07-18T13:31:27.000Z | [
"pytorch",
"ldmbert",
"arxiv:2112.10752",
"transformers",
"diffusion",
"license:mit"
] | null | false | fusing | null | fusing/latent-diffusion-text2im-large | 86 | 2 | transformers | 4,889 | ---
tags:
- diffusion
license: mit
---
# Latent Diffusion
**Paper**: [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)
**Abstract**:
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at this https URL.
## Usage
```python
import numpy as np
import PIL
import torch

from diffusers import DiffusionPipeline
ldm = DiffusionPipeline.from_pretrained("fusing/latent-diffusion-text2im-large")
generator = torch.manual_seed(42)
prompt = "A painting of a squirrel eating a burger"
image = ldm([prompt], generator=generator, eta=0.3, guidance_scale=6.0, num_inference_steps=50)
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = image_processed * 255.
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. "A street sign that reads Huggingface."

2."A painting of a squirrel eating a burger"
 |
tornqvistmax/7cats_finetuned | f747e28016a4aacf52435f263855c669bcd421d1 | 2022-06-16T14:43:45.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | tornqvistmax | null | tornqvistmax/7cats_finetuned | 86 | null | transformers | 4,890 | Entry not found |
Vlasta/DNADebertaK7 | ba8d72bc65ce662e14e0ecf36a815ed003beb5f5 | 2022-07-05T23:37:40.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Vlasta | null | Vlasta/DNADebertaK7 | 86 | null | transformers | 4,891 | Entry not found |
Chrode/bert_prot_temp_classifier | e86766a768e54af94880b3955b93b6bc08bd624c | 2022-07-04T07:54:58.000Z | [
"pytorch",
"BertTempProtClassifier",
"transformers"
] | null | false | Chrode | null | Chrode/bert_prot_temp_classifier | 86 | null | transformers | 4,892 | Entry not found |
darragh/swinunetr-btcv-tiny | ce084bb73ca3af65aa33b79a8d00eab93550122e | 2022-07-15T21:01:18.000Z | [
"pytorch",
"en",
"dataset:BTCV",
"transformers",
"btcv",
"medical",
"swin",
"license:apache-2.0"
] | null | false | darragh | null | darragh/swinunetr-btcv-tiny | 86 | null | transformers | 4,893 | ---
language: en
tags:
- btcv
- medical
- swin
license: apache-2.0
datasets:
- BTCV
---
# Model Overview
This repository contains the code for Swin UNETR [1,2]. Swin UNETR is the state of the art on the Medical Segmentation
Decathlon (MSD) and Beyond the Cranial Vault (BTCV) Segmentation Challenge datasets. In [1], a novel methodology is devised for pre-training the Swin UNETR backbone in a self-supervised
manner. We provide the option for training Swin UNETR by fine-tuning from pre-trained self-supervised weights or from scratch.
The source repository for the training of these models can be found [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV).
# Installing Dependencies
Dependencies for training and inference can be installed using the model requirements :
``` bash
pip install -r requirements.txt
```
# Intended uses & limitations
You can use the raw model for DICOM segmentation, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks which segment CAT scans or MRIs in DICOM format. DICOM metadata mostly differs across medical facilities, so when applying the model to a new dataset it should be fine-tuned.
# How to use
To install necessary dependencies, run the below in bash.
```
git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc
pip install -r pmrc/requirements.txt
cd pmrc/SwinUNETR/BTCV
```
To load the model from the hub.
```
>>> from swinunetr import SwinUnetrModelForInference
>>> model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-tiny')
```
# Limitations and bias
The training data used for this model is specific to CAT scans from certain health facilities and machines. Data from other facilities may differ in image distributions and may require fine-tuning of the models for best performance.
# Evaluation results
We provide several pre-trained models on BTCV dataset in the following.
<table>
<tr>
<th>Name</th>
<th>Dice (overlap=0.7)</th>
<th>Dice (overlap=0.5)</th>
<th>Feature Size</th>
<th># params (M)</th>
<th>Self-Supervised Pre-trained </th>
</tr>
<tr>
<td>Swin UNETR/Base</td>
<td>82.25</td>
<td>81.86</td>
<td>48</td>
<td>62.1</td>
<td>Yes</td>
</tr>
<tr>
<td>Swin UNETR/Small</td>
<td>79.79</td>
<td>79.34</td>
<td>24</td>
<td>15.7</td>
<td>No</td>
</tr>
<tr>
<td>Swin UNETR/Tiny</td>
<td>72.05</td>
<td>70.35</td>
<td>12</td>
<td>4.0</td>
<td>No</td>
</tr>
</table>
# Data Preparation

The training data is from the [BTCV challenge dataset](https://www.synapse.org/#!Synapse:syn3193805/wiki/217752).
- Target: 13 abdominal organs including 1. Spleen 2. Right Kidney 3. Left Kidney 4. Gallbladder 5. Esophagus 6. Liver 7. Stomach 8. Aorta 9. IVC 10. Portal and Splenic Veins 11. Pancreas 12. Right adrenal gland 13. Left adrenal gland.
- Task: Segmentation
- Modality: CT
- Size: 30 3D volumes (24 Training + 6 Testing)
# Training
See the source repository [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV) for information on training.
# BibTeX entry and citation info
If you find this repository useful, please consider citing the following papers:
```
@inproceedings{tang2022self,
title={Self-supervised pre-training of swin transformers for 3d medical image analysis},
author={Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20730--20740},
year={2022}
}
@article{hatamizadeh2022swin,
title={Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
author={Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger and Xu, Daguang},
journal={arXiv preprint arXiv:2201.01266},
year={2022}
}
```
# References
[1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740).
[2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.
|
Helsinki-NLP/opus-mt-fr-vi | d7a313fa61fa59ee759fd877759de83fd244dce8 | 2021-01-18T08:49:25.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fr",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fr-vi | 85 | null | transformers | 4,894 | ---
language:
- fr
- vi
tags:
- translation
license: apache-2.0
---
### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
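## Usage
A minimal usage sketch with the Marian classes from the `transformers` library; the example sentence and printed output are illustrative, not taken from the test set.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a French sentence into Vietnamese.
batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```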
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.vie | 31.1 | 0.486 |
### System Info:
- hf_name: fra-vie
- source_languages: fra
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'vi']
- src_constituents: {'fra'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: vie
- short_pair: fr-vi
- chrF2_score: 0.486
- bleu: 31.1
- brevity_penalty: 0.985
- ref_len: 13219.0
- src_name: French
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: vi
- prefer_old: False
- long_pair: fra-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KrishParikh/gpt2_imdb_movie_plots | ab4a551b10ca184133e3c2a3e213b71d3495f65c | 2021-11-21T20:11:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | KrishParikh | null | KrishParikh/gpt2_imdb_movie_plots | 85 | null | transformers | 4,895 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-plot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-plot
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
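For reference, here is a hedged sketch of how the hyperparameters above map onto `TrainingArguments`; the output directory is an assumption, and the original training script may have set additional options.
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; "gpt2-plot" is an assumed output dir.
args = TrainingArguments(
    output_dir="gpt2-plot",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,  # Native AMP mixed precision
)
```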
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
|
SkolkovoInstitute/roberta_toxicity_classifier_v1 | 0aeddbf4acc227b80ca3a5e2409c26b0988639a2 | 2021-11-02T18:36:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"arxiv:1911.00536",
"transformers"
] | text-classification | false | SkolkovoInstitute | null | SkolkovoInstitute/roberta_toxicity_classifier_v1 | 85 | null | transformers | 4,896 | This model is a clone of [SkolkovoInstitute/roberta_toxicity_classifier](https://huggingface.co/SkolkovoInstitute/roberta_toxicity_classifier) trained on a disjoint dataset.
While `roberta_toxicity_classifier` is intended for evaluating detoxification algorithms, `roberta_toxicity_classifier_v1` can be used inside those algorithms themselves, so that the classifier guiding detoxification is not the same one used to evaluate it, as in the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536). A usage sketch follows.
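A minimal sketch of loading the classifier with the `pipeline` API; the printed label and score in the comment are illustrative assumptions, not taken from this model's config.
```python
from transformers import pipeline

# Toxicity classifier; the exact label names depend on the model's config.
clf = pipeline(
    "text-classification",
    model="SkolkovoInstitute/roberta_toxicity_classifier_v1",
)
print(clf("You are a wonderful person."))
# e.g. [{'label': 'neutral', 'score': 0.99}]  (illustrative output)
```
|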
TODBERT/TOD-DistilBERT-JNT-V1 | c21cde3992721781e96604a7030de7bff81dc663 | 2020-08-26T18:39:56.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | TODBERT | null | TODBERT/TOD-DistilBERT-JNT-V1 | 85 | null | transformers | 4,897 | Entry not found |
YituTech/conv-bert-medium-small | 4889125682c36c29e56dd0a70717c8706ef1333a | 2021-02-24T11:24:27.000Z | [
"pytorch",
"tf",
"convbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | YituTech | null | YituTech/conv-bert-medium-small | 85 | null | transformers | 4,898 | Entry not found |
dbernsohn/roberta-python | cb9ceb18059e5edec8431480d2a21f749b5b4fca | 2021-05-20T15:57:13.000Z | [
"pytorch",
"jax",
"roberta",
"fill-mask",
"python",
"dataset:code_search_net",
"arxiv:1907.11692",
"transformers",
"autotrain_compatible"
] | fill-mask | false | dbernsohn | null | dbernsohn/roberta-python | 85 | 3 | transformers | 4,899 | # roberta-python
---
language: python
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Python** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-python")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-python")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked tokens in Python code.
```python
code = """
new_dict = {}
for k, v in my_dict.<mask>():
new_dict[k] = v**2
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('items', 0.7376779913902283),
# ('keys', 0.16238391399383545),
# ('values', 0.03965481370687485),
# ('iteritems', 0.03346433863043785),
# ('splitlines', 0.0032723243348300457)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|