modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AnonymousSub/declutr-emanuals-s10-SR | ab42891519600ce989a82810d1d27f71639d219f | 2021-10-05T11:21:10.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/declutr-emanuals-s10-SR | 4 | null | transformers | 17,700 | Entry not found |
AnonymousSub/declutr-model_wikiqa | c915d8180aba8551ccf9564b6a5daae155cffc61 | 2022-01-22T23:40:52.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/declutr-model_wikiqa | 4 | null | transformers | 17,701 | Entry not found |
AnonymousSub/declutr-s10-AR | 1c7784f7ef0c7ddab0c4f047c2449d065e8241c2 | 2021-10-03T04:57:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/declutr-s10-AR | 4 | null | transformers | 17,702 | Entry not found |
AnonymousSub/declutr-s10-SR | 6c9b48b3edb4a42cb37fe2382f9ee417713faab1 | 2021-10-05T12:09:55.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/declutr-s10-SR | 4 | null | transformers | 17,703 | Entry not found |
AnonymousSub/dummy_1 | 72d248566bd9c31b4303142dc7b2c802c35bf395 | 2021-11-03T04:54:19.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/dummy_1 | 4 | null | transformers | 17,704 | Entry not found |
AnonymousSub/hier_triplet_epochs_1_shard_10 | 2408f87a1459c0a1d6e11011e8229fbbd88891bf | 2022-01-04T08:14:15.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/hier_triplet_epochs_1_shard_10 | 4 | null | transformers | 17,705 | Entry not found |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa | 2827f6da850fa22a37c415bebe10fd5055bac93c | 2022-01-22T23:46:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa | 4 | null | transformers | 17,706 | Entry not found |
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa | f114a399c68ee63e817256d7bf660edeaf9b4525 | 2022-01-23T01:42:47.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa | 4 | null | transformers | 17,707 | Entry not found |
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1_wikiqa | 4589d2f4ce1ef56ff99a06edd15ea28f3b529ae5 | 2022-01-23T07:48:22.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | AnonymousSub | null | AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1_wikiqa | 4 | null | transformers | 17,708 | Entry not found |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0 | 623372de34aa1ce3ce86ac98a428bd3efd1c85ed | 2022-01-18T03:22:36.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0 | 4 | null | transformers | 17,709 | Entry not found |
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | 3470490bf6a22158a282c3b58331f9d4c0277333 | 2022-01-05T10:20:08.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | 4 | null | transformers | 17,710 | Entry not found |
Arnold/wav2vec2-hausa2-demo-colab | f7924ab0871c0e9e42e47494670d7aea0a1e1da2 | 2022-02-13T01:24:29.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Arnold | null | Arnold/wav2vec2-hausa2-demo-colab | 4 | null | transformers | 17,711 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-hausa2-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hausa2-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2032
- Wer: 0.7237
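As a rough usage sketch (not part of the original card; the file path and 16 kHz mono input are assumptions), the checkpoint can be run through the `transformers` ASR pipeline:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="Arnold/wav2vec2-hausa2-demo-colab")

# "sample.wav" is a placeholder for a 16 kHz mono Hausa recording.
print(asr("sample.wav")["text"])
```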
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1683 | 12.49 | 400 | 1.0279 | 0.7211 |
| 0.0995 | 24.98 | 800 | 1.2032 | 0.7237 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Ayah/GPT2-DBpedia | 9c448b3a659e0b667b9a3112a0eb229a194a630e | 2022-01-30T07:32:41.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Ayah | null | Ayah/GPT2-DBpedia | 4 | null | transformers | 17,712 | Entry not found |
AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2 | a58669cd7df42be1c7c82d5c1b32c205b1599ea8 | 2021-10-21T19:08:11.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | question-answering | false | AyushPJ | null | AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2 | 4 | null | transformers | 17,713 | ---
tags:
- generated_from_trainer
model-index:
- name: ai-club-inductions-21-nlp-roBERTa-base-squad-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-club-inductions-21-nlp-roBERTa-base-squad-v2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Azaghast/DistilBART-SCP-ParaSummarization | 1e8ad1c9629548a502a7b044ea20483ca3b22e99 | 2021-08-25T09:49:44.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Azaghast | null | Azaghast/DistilBART-SCP-ParaSummarization | 4 | null | transformers | 17,714 | Entry not found |
BME-TMIT/foszt2oszt | 8ad158f4f1d1d758d5d286533ee5aa63a10ef11a | 2022-01-07T16:10:24.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"hu",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | BME-TMIT | null | BME-TMIT/foszt2oszt | 4 | 1 | transformers | 17,715 | ---
language: hu
metrics: rouge
---
[Paper](https://hlt.bme.hu/en/publ/foszt2oszt)
We publish an abstractive summarizer for Hungarian, an
encoder-decoder model initialized with [huBERT](https://huggingface.co/SZTAKI-HLT/hubert-base-cc), and fine-tuned on the
[ELTE.DH](https://elte-dh.hu/) corpus of former Hungarian news portals. The model produces fluent output in the correct topic, but it hallucinates frequently.
Our quantitative evaluation on automatic and human transcripts of news
(with automatic and human-made punctuation, [Tündik et al. (2019)](https://www.isca-speech.org/archive/interspeech_2019/tundik19_interspeech.html), [Tündik and Szaszák (2019)](https://www.isca-speech.org/archive/interspeech_2019/szaszak19_interspeech.html)) shows that the model is
robust with respect to errors in either automatic speech recognition or
automatic punctuation restoration. In fine-tuning and inference, we followed [a jupyter notebook by Patrick von
Platen](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb). Most hyper-parameters are the same as those by von Platen, but we
found it advantageous to change the minimum length of the summary to 8 word-
pieces (instead of 56), and the number of beams in beam search to 5 (instead
of 4). Our model was fine-tuned on a server of the [SZTAKI-HLT](https://hlt.bme.hu/) group, which kindly
provided access to it. |
BenWitter/DialoGPT-small-Tyrion | 4e897f7bbce4404407f2d50732467dd66350fd84 | 2021-09-20T17:39:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BenWitter | null | BenWitter/DialoGPT-small-Tyrion | 4 | null | transformers | 17,716 | tags:
- conversational
inference: false
conversational: true
# First chat bot, built by following a guide; low epoch count due to limited resources. |
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | 07764ede835930a7cc40f69eda072d056d136e1f | 2021-11-23T09:32:23.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Bharathdamu | null | Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | 4 | null | transformers | 17,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
BigSalmon/FormalBerta3 | 86b989161f549a417563cea263422eb8f87cf490 | 2021-12-02T00:20:12.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | BigSalmon | null | BigSalmon/FormalBerta3 | 4 | null | transformers | 17,718 | https://huggingface.co/spaces/BigSalmon/MASK2 |
BigSalmon/GPTT | 4c72f196e8692b25ef0f033d3ff865126f9bc2b5 | 2021-10-02T23:55:43.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/GPTT | 4 | null | transformers | 17,719 | Entry not found |
BlightZz/MakiseKurisu | 553ae6dc0ebe05d887df9d2caddf7b6abf2f5562 | 2021-07-01T19:02:26.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BlightZz | null | BlightZz/MakiseKurisu | 4 | null | transformers | 17,720 | ---
tags:
- conversational
---
# A small model based on the character Makise Kurisu from Steins;Gate. This was made as a test.
# A new medium model was made using her lines; I also added some fixes. It can be found here:
# https://huggingface.co/BlightZz/DialoGPT-medium-Kurisu |
BonjinKim/dst_kor_bert | 995b9e1adb5db23ec8fdf23397d01e938579122f | 2021-05-19T05:35:57.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | BonjinKim | null | BonjinKim/dst_kor_bert | 4 | null | transformers | 17,721 | # Korean bert base model for DST
- This is ConversationBert for dsksd/bert-ko-small-minimal(base-module) + 5 datasets
- Use dsksd/bert-ko-small-minimal tokenizer
- 5 datasets
- tweeter_dialogue : xlsx
- speech : trn
- office_dialogue : json
- KETI_dialogue : txt
- WOS_dataset : json
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BonjinKim/dst_kor_bert")
model = AutoModel.from_pretrained("BonjinKim/dst_kor_bert")
``` |
BumBelDumBel/ZORK-AI-TEST | 55e764ecdddcb530a58f27b9e69ab36701541d24 | 2021-07-16T17:12:42.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | false | BumBelDumBel | null | BumBelDumBel/ZORK-AI-TEST | 4 | null | transformers | 17,722 | ---
license: mit
tags:
- generated_from_trainer
model_index:
- name: ZORK-AI-TEST
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ZORK-AI-TEST
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
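A minimal generation sketch (not part of the original card; the prompt is only an illustrative assumption suggested by the model name):
```python
from transformers import pipeline

# Hypothetical example prompt; the training data for this checkpoint is not documented.
generator = pipeline("text-generation", model="BumBelDumBel/ZORK-AI-TEST")
print(generator("You are standing in an open field west of a white house.",
                max_length=60, do_sample=True)[0]["generated_text"])
```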
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
CLAck/indo-mixed | 71d7128837b1e62559dfdb321e4d8a70bf517f72 | 2022-02-15T11:25:18.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"id",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | CLAck | null | CLAck/indo-mixed | 4 | 1 | transformers | 17,723 | ---
language:
- en
- id
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This model is pretrained on Chinese and Indonesian, and fine-tuned on Indonesian.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Indonesian available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/indo-mixed")
tokenizer = AutoTokenizer.from_pretrained("CLAck/indo-mixed")
# Download a tokenizer that can tokenize English, since the model's tokenizer no longer knows how to do it
# We used the one coming from the initial model
# This tokenizer is used to tokenize the input sentence
tokenizer_en = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
# These special tokens are needed to reproduce the original tokenizer
tokenizer_en.add_tokens(["<2zh>", "<2indo>"], special_tokens=True)
sentence = "The cat is on the table"
# This token is needed to identify the target language
input_sentence = "<2indo> " + sentence
translated = model.generate(**tokenizer_en(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
MIXED
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 24.2579 |
| 2.0 | 30.6287 |
| 3.0 | 34.4417 |
| 4.0 | 36.2577 |
| 5.0 | 37.3488 |
FINETUNING
| Epoch | Bleu |
|:-----:|:-------:|
| 6.0 | 34.1676 |
| 7.0 | 35.2320 |
| 8.0 | 36.7110 |
| 9.0 | 37.3195 |
| 10.0 | 37.9461 | |
CLAck/vi-en | 9144e1b986723d126a844b525e8e8656efabd513 | 2022-02-15T11:33:16.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"vi",
"dataset:ALT",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | CLAck | null | CLAck/vi-en | 4 | null | transformers | 17,724 | ---
language:
- en
- vi
tags:
- translation
license: apache-2.0
datasets:
- ALT
metrics:
- sacrebleu
---
This is a finetuning of a MarianMT pretrained on Chinese-English. The target language pair is Vietnamese-English.
### Example
```
%%capture
!pip install transformers transformers[sentencepiece]
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Download the pretrained model for English-Vietnamese available on the hub
model = AutoModelForSeq2SeqLM.from_pretrained("CLAck/vi-en")
tokenizer = AutoTokenizer.from_pretrained("CLAck/vi-en")
sentence = "Xin chào thế giới"  # example input; replace with your own Vietnamese sentence
# This token is needed to identify the source language
input_sentence = "<2vi> " + sentence
translated = model.generate(**tokenizer(input_sentence, return_tensors="pt", padding=True))
output_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
```
### Training results
| Epoch | Bleu |
|:-----:|:-------:|
| 1.0 | 21.3180 |
| 2.0 | 26.8012 |
| 3.0 | 29.3578 |
| 4.0 | 31.5178 |
| 5.0 | 32.8740 |
|
CLTL/icf-levels-mbw | 8792f07ac8397ec5b5e2914907d575222a2fa088 | 2021-11-08T12:21:31.000Z | [
"pytorch",
"roberta",
"text-classification",
"nl",
"transformers",
"license:mit"
] | text-classification | false | CLTL | null | CLTL/icf-levels-mbw | 4 | 1 | transformers | 17,725 | ---
language: nl
license: mit
pipeline_tag: text-classification
inference: false
---
# Regression Model for Weight Maintenance Functioning Levels (ICF b530)
## Description
A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
## Functioning levels
Level | Meaning
---|---
4 | Healthy weight, no unintentional weight loss or gain, SNAQ 0 or 1.
3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards.
2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month), SNAQ 2.
1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months), SNAQ ≥ 3.
0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to ICU.
The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
## Intended uses and limitations
- The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records).
- The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
## How to use
To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:
```
import numpy as np
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
'roberta',
'CLTL/icf-levels-mbw',
use_cuda=False,
)
example = 'Tijdens opname >10 kg afgevallen.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```
The prediction on the example is:
```
1.95
```
The raw outputs look like this:
```
[[1.95429301]]
```
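Because the raw output can fall slightly outside the 0-4 scale, a small post-processing sketch (an assumption, not something prescribed by the card) that clamps and rounds the prediction to the nearest level:
```python
import numpy as np

# Raw regression output from the example above.
raw = 1.95429301

# Clamp to the 0-4 scale and round to the nearest functioning level.
level = int(np.rint(np.clip(raw, 0, 4)))
print(level)  # -> 2
```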
## Training data
- The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released.
- The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
## Training procedure
The default training parameters of Simple Transformers were used, including:
- Optimizer: AdamW
- Learning rate: 4e-5
- Num train epochs: 1
- Train batch size: 8
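Expressed as code, the defaults above roughly correspond to the following Simple Transformers configuration (a sketch; the base-model path and the `regression=True` flag are assumptions):
```python
from simpletransformers.classification import ClassificationArgs, ClassificationModel

# Rough reconstruction of the defaults listed above (AdamW is the Simple Transformers
# default optimizer); the base-model path is a placeholder, not the real checkpoint.
model_args = ClassificationArgs(
    learning_rate=4e-5,
    num_train_epochs=1,
    train_batch_size=8,
    regression=True,  # assumption: functioning levels are trained as a regression target
)
model = ClassificationModel(
    "roberta", "path/to/dutch-medical-roberta", num_labels=1, args=model_args, use_cuda=False
)
```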
## Evaluation results
The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals).
| | Sentence-level | Note-level
|---|---|---
mean absolute error | 0.81 | 0.60
mean squared error | 0.83 | 0.56
root mean squared error | 0.91 | 0.75
## Authors and references
### Authors
Jenia Kim, Piek Vossen
### References
TBD
|
CarlosPR/mt5-spanish-memmories-analysis | e9b9d60ddf8dad3c64c0a7be09db6f356edac8a5 | 2021-07-11T15:11:55.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | CarlosPR | null | CarlosPR/mt5-spanish-memmories-analysis | 4 | null | transformers | 17,726 | **mt5-spanish-memmories-analysis**
**// ES**
Este es un trabajo en proceso.
Este modelo aún es solo un punto de control inicial que mejoraré en los próximos meses.
El objetivo es proporcionar un modelo capaz de, utilizando una combinación de tareas del modelo mT5, comprender los recuerdos y proporcionar una interacción útil para las personas con alzeimer o personas como mi propio abuelo que escribió sus recuerdos, pero ahora es solo un libro en la estantería. por lo que este modelo puede hacer que esos recuerdos parezcan "vivos".
Pronto (si aún no está cargado) cargaré un cuaderno de **Google Colaboratory con una aplicación visual** que al usar este modelo proporcionará toda la interacción necesaria y deseada con una interfaz fácil de usar.
**LINK APLICACIÓN (sobre él se actualizará la versión):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing
-> Debe descargarse la carpeta "memorium" del enlace y subirse a Google Drive sin incluir en ninguna otra carpeta (directamente en "Mi unidad").
-> A continuación se podrá abrir la app, encontrada dentro de dicha carpeta "memorium" con nombre "APP-Memorium" (el nombre puede incluir además un indicador de versión).
-> Si haciendo doble click en el archivo de la app no permite abrirla, debe hacerse pulsando el botón derecho sobre el archivo y seleccionar "Abrir con", "Conectar más aplicaciones", y a continuación escoger Colaboratory (se pedirá instalar). Completada la instalación (tiempo aproximado: 2 minutos) se podrá cerrar la ventana de instalación para volver a visualizar la carpeta donde se encuentra el fichero de la app, que de ahora en adelante se podrá abrir haciendo doble click.
-> Se podrán añadir memorias en la carpeta "perfiles" como se indica en la aplicación en el apartado "crear perfil".
**// EN**
This is a work in process.
This model is still just an initial checkpoint that I will be improving over the following months.
**APP LINK (it will contain the latest version):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing
-> The folder "memorium" must be downloaded and then uploaded to Google Drive at "My Drive", NOT inside any other folder.
The aim is to provide a model able to, using a mixture of mT5 tasks, understand memories and provide an interaction useful for people with Alzheimer's, or for people like my own grandfather, who wrote down his memories but whose book now just sits on the shelf; this model can make those memories seem 'alive'.
I will soon (if it isn't uploaded already) upload a **Google Colaboratory notebook with a visual App** that uses this model to provide all the needed interaction through an easy-to-use interface.
|
CenIA/albert-large-spanish-finetuned-xnli | 84a4bc2c62b369a19fed311ec452c2df40b7749d | 2021-12-12T03:44:52.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-large-spanish-finetuned-xnli | 4 | null | transformers | 17,727 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-mldoc | 04d8633a0a96d7c3d6c36aa9b803d24365999ec6 | 2022-01-11T04:58:11.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-mldoc | 4 | null | transformers | 17,728 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-pawsx | 7ba6eade81ea19c30aba4ba1b7a96afd7fd8e655 | 2022-01-03T17:56:40.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-pawsx | 4 | null | transformers | 17,729 | Entry not found |
CenIA/albert-xlarge-spanish-finetuned-xnli | dbbbd1255a24da7c87bdcef1597cd6f627d081d3 | 2021-12-12T03:57:48.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-xlarge-spanish-finetuned-xnli | 4 | null | transformers | 17,730 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-mldoc | 1bef7cf6bb102c7199bc4ae6a9c2adf96c919062 | 2022-01-12T13:00:28.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-mldoc | 4 | null | transformers | 17,731 | Entry not found |
CenIA/albert-xxlarge-spanish-finetuned-pawsx | 514f890d3e4819d08b135a6a93f506a86e1d2f79 | 2022-01-06T04:05:17.000Z | [
"pytorch",
"albert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/albert-xxlarge-spanish-finetuned-pawsx | 4 | null | transformers | 17,732 | Entry not found |
CenIA/bert-base-spanish-wwm-cased-finetuned-pawsx | 2b721ed53fcd6ef1618640ef70c30867e6535195 | 2022-01-03T22:28:02.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/bert-base-spanish-wwm-cased-finetuned-pawsx | 4 | null | transformers | 17,733 | Entry not found |
CenIA/distillbert-base-spanish-uncased-finetuned-pawsx | 8746b2a66e3a49f023f9e14f9bc166ab882e392b | 2022-01-04T21:31:35.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-pawsx | 4 | null | transformers | 17,734 | Entry not found |
CenIA/distillbert-base-spanish-uncased-finetuned-qa-mlqa | 60e143f6bf36ab7bbc919c39617f38cd4d9abfcd | 2022-01-18T22:02:21.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | CenIA | null | CenIA/distillbert-base-spanish-uncased-finetuned-qa-mlqa | 4 | null | transformers | 17,735 | Entry not found |
CennetOguz/distilbert-base-uncased-finetuned-imdb | 77edd47f11c46ac35f1b3c6acecb685b83ae52cb | 2022-02-17T17:18:06.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-imdb | 4 | null | transformers | 17,736 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5187
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 2.5483 |
| No log | 2.0 | 80 | 2.4607 |
| No log | 3.0 | 120 | 2.5474 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1 | e69ec17e1f52981bcf8bd66a7286e46a23dbc8df | 2022-02-21T22:12:47.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1 | 4 | null | transformers | 17,737 | Entry not found |
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | 695d10e61179941347eb707dae8028c747f2c542 | 2022-02-17T21:23:26.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | CennetOguz | null | CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | 4 | null | transformers | 17,738 | Entry not found |
Cheatham/xlm-roberta-base-finetuned | 48df48eff9fea0fe880642bda1eddf5ad8bc55c8 | 2022-01-27T10:49:20.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-base-finetuned | 4 | null | transformers | 17,739 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d1 | cd2ce45fabfd818dbf157bc56bfff42a2e8520c1 | 2022-01-27T12:22:59.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d1 | 4 | null | transformers | 17,740 | Entry not found |
Cheatham/xlm-roberta-large-finetuned-d12 | b9b0299b3eb9b78c41f5347c05cdf637bab3c3c3 | 2022-02-08T16:54:37.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned-d12 | 4 | null | transformers | 17,741 | Entry not found |
Cheatham/xlm-roberta-large-finetuned | 354f831c966b14d72591b9e0be2e64198e1edf5e | 2021-09-20T19:07:46.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned | 4 | null | transformers | 17,742 | Entry not found |
Cheatham/xlm-roberta-large-finetuned3 | c2c3d4ff4d33ae2d364d8ede9d8749f1df991036 | 2022-01-27T10:26:35.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | Cheatham | null | Cheatham/xlm-roberta-large-finetuned3 | 4 | null | transformers | 17,743 | Entry not found |
CianB/Reed | d90adfdca20b40b70242ebc1bc05c5463d5766ff | 2021-08-27T18:13:32.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | CianB | null | CianB/Reed | 4 | null | transformers | 17,744 | ---
tags:
- conversational
---
# Reed |
CleveGreen/FieldClassifier_v2 | da4315aa4cec7f0fbe9df130a514f3a80bd1dab0 | 2022-02-04T17:36:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | CleveGreen | null | CleveGreen/FieldClassifier_v2 | 4 | 1 | transformers | 17,745 | Entry not found |
CleveGreen/FieldClassifier_v2_gpt | be0a2428690ab693ed8f8280b7e62c1f50cfac7f | 2022-02-16T19:24:10.000Z | [
"pytorch",
"gpt2",
"text-classification",
"transformers"
] | text-classification | false | CleveGreen | null | CleveGreen/FieldClassifier_v2_gpt | 4 | null | transformers | 17,746 | Entry not found |
Contrastive-Tension/BERT-Distil-CT-STSb | b9ffe4b45ae9bfe2ccdcec3e35a84ad4b6a9074d | 2021-02-23T19:38:16.000Z | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | Contrastive-Tension | null | Contrastive-Tension/BERT-Distil-CT-STSb | 4 | null | transformers | 17,747 | Entry not found |
CrisLeaf/generador-de-historias-de-tolkien | 20656410cab68d19c48424985613c5d1438b8bbe | 2022-01-18T02:57:12.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | CrisLeaf | null | CrisLeaf/generador-de-historias-de-tolkien | 4 | null | transformers | 17,748 | hello
|
DCU-NLP/bert-base-irish-cased-v1 | 63f70eed1862b3a35c344fd4bd4dcf6ba194c366 | 2022-06-29T15:30:00.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"arxiv:2107.12930",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
] | fill-mask | false | DCU-NLP | null | DCU-NLP/bert-base-irish-cased-v1 | 4 | null | transformers | 17,749 | ---
tags:
- generated_from_keras_callback
model-index:
- name: bert-base-irish-cased-v1
results: []
widget:
- text: "Ceoltóir [MASK] ab ea Johnny Cash."
---
# bert-base-irish-cased-v1
[gaBERT](https://arxiv.org/abs/2107.12930) is a BERT-base model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper.
## Model description
Encoder-based Transformer to be used to obtain features for finetuning for downstream tasks in Irish.
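For example (not part of the original card), the widget sentence from the metadata above can be run through the fill-mask pipeline:
```python
from transformers import pipeline

# Reuses the widget example from the model card's metadata.
unmasker = pipeline("fill-mask", model="DCU-NLP/bert-base-irish-cased-v1")
print(unmasker("Ceoltóir [MASK] ab ea Johnny Cash."))
```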
## Intended uses & limitations
Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DCU-NLP/electra-base-irish-cased-discriminator-v1 | f88b98179fb553bcf99e466c7dcc00a3d435ce92 | 2021-11-15T18:03:16.000Z | [
"pytorch",
"electra",
"pretraining",
"ga",
"arxiv:2107.12930",
"transformers",
"irish",
"license:apache-2.0"
] | null | false | DCU-NLP | null | DCU-NLP/electra-base-irish-cased-discriminator-v1 | 4 | null | transformers | 17,750 | ---
language:
- ga
license: apache-2.0
tags:
- irish
- electra
widget:
- text: "Ceoltóir [MASK] ab ea Johnny Cash."
---
# gaELECTRA
[gaELECTRA](https://arxiv.org/abs/2107.12930) is an ELECTRA model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used please refer to our paper. For fine-tuning this model on a token classification task, e.g. Named Entity Recognition, use the discriminator model.
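As a sketch of the fine-tuning setup mentioned above (the label count is an illustrative assumption, not from the card):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("DCU-NLP/electra-base-irish-cased-discriminator-v1")
# A token-classification head is newly initialised on top of the discriminator;
# num_labels=9 is only an illustrative assumption (e.g. a BIO-style NER tag set).
model = AutoModelForTokenClassification.from_pretrained(
    "DCU-NLP/electra-base-irish-cased-discriminator-v1", num_labels=9
)
```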
### Limitations and bias
Some data used to pretrain gaBERT was scraped from the web which potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
### BibTeX entry and citation info
If you use this model in your research, please consider citing our paper:
```
@article{DBLP:journals/corr/abs-2107-12930,
author = {James Barry and
Joachim Wagner and
Lauren Cassidy and
Alan Cowap and
Teresa Lynn and
Abigail Walsh and
M{\'{\i}}che{\'{a}}l J. {\'{O}} Meachair and
Jennifer Foster},
title = {gaBERT - an Irish Language Model},
journal = {CoRR},
volume = {abs/2107.12930},
year = {2021},
url = {https://arxiv.org/abs/2107.12930},
archivePrefix = {arXiv},
eprint = {2107.12930},
timestamp = {Fri, 30 Jul 2021 13:03:06 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-12930.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
DSI/TweetBasedSA | 537daa619bed898f1592e3669c56ad8bc3bbc697 | 2021-12-05T08:51:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | DSI | null | DSI/TweetBasedSA | 4 | null | transformers | 17,751 | Entry not found |
DataikuNLP/distiluse-base-multilingual-cased-v1 | 80cd39db49377b868748b227d6f3bac677bf5e6a | 2021-09-02T08:25:03.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | DataikuNLP | null | DataikuNLP/distiluse-base-multilingual-cased-v1 | 4 | null | sentence-transformers | 17,752 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# DataikuNLP/distiluse-base-multilingual-cased-v1
**This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) from sentence-transformers at the specific commit `3a706e4d65c04f868c4684adfd4da74141be8732`.**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
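As a small follow-up (not part of the original card), the 512-dimensional embeddings can be compared with cosine similarity:
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DataikuNLP/distiluse-base-multilingual-cased-v1")
a, b = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two 512-dimensional embeddings.
print(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
```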
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v1)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
Davlan/bert-base-multilingual-cased-finetuned-luo | e6416027901f8cd338782bad4ad0a564f75c6e95 | 2021-06-30T21:10:52.000Z | [
"pytorch",
"bert",
"fill-mask",
"luo",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-luo | 4 | null | transformers | 17,753 |
---
language: luo
datasets:
---
# bert-base-multilingual-cased-finetuned-luo
## Model description
**bert-base-multilingual-cased-finetuned-luo** is a **Luo BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Luo language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Luo corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-luo')
>>> unmasker("Obila ma Changamwe [MASK] pedho achije angwen mag njore")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | luo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 74.22 | 75.59
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-naija | c09d913ba4fab8a9f82371469dc99d36de81c380 | 2021-06-15T20:39:28.000Z | [
"pytorch",
"bert",
"fill-mask",
"pcm",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-naija | 4 | null | transformers | 17,754 |
---
language: pcm
datasets:
---
# bert-base-multilingual-cased-finetuned-naija
## Model description
**bert-base-multilingual-cased-finetuned-naija** is a **Nigerian-Pidgin BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Nigerian-Pidgin language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Nigerian-Pidgin corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-naija')
>>> unmasker("Another attack on ambulance happen for Koforidua in March [MASK] year where robbers kill Ambulance driver")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on JW300 + [BBC Pidgin](https://www.bbc.com/pidgin)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | pcm_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.23 | 89.95
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-finetuned-wolof | 682ea796df2d59fbf8e1ef75ef8fca6d37532355 | 2021-06-30T15:50:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"wo",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Davlan | null | Davlan/bert-base-multilingual-cased-finetuned-wolof | 4 | null | transformers | 17,755 |
---
language: wo
datasets:
---
# bert-base-multilingual-cased-finetuned-wolof
## Model description
**bert-base-multilingual-cased-finetuned-wolof** is a **Wolof BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Wolof language texts. It provides **better performance** than the multilingual BERT on named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Wolof corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-wolof')
>>> unmasker("Màkki Sàll feeñal na ay xalaatam ci mbir yu am solo yu soxal [MASK] ak Afrik.")
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Bible OT](http://biblewolof.com/) + [OPUS](https://opus.nlpl.eu/) + News Corpora (Lu Defu Waxu, Saabal, and Wolof Online)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | wo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 64.52 | 69.43
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/byt5-base-yor-eng-mt | 702c112405b3a5712b785c6399ca4d81d486c55b | 2021-08-08T21:58:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"yo",
"en",
"dataset:JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Davlan | null | Davlan/byt5-base-yor-eng-mt | 4 | 1 | transformers | 17,756 |
---
language:
- yo
- en
datasets:
- JW300 + [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
---
# byt5-base-yor-eng-mt
## Model description
**byt5-base-yor-eng-mt** is a **machine translation** model from Yorùbá language to English language based on a fine-tuned byt5-base model. It establishes a **strong baseline** for automatically translating texts from Yorùbá to English.
Specifically, this model is a *byt5-base* model that was fine-tuned on JW300 Yorùbá corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt)
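A minimal translation sketch (not in the original card; the Yorùbá input sentence is only an illustrative assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Davlan/byt5-base-yor-eng-mt")
model = AutoModelForSeq2SeqLM.from_pretrained("Davlan/byt5-base-yor-eng-mt")

# Illustrative Yorùbá input (assumption); the model generates English text.
inputs = tokenizer("Ọjọ́ dára lónìí.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```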
#### Limitations and bias
This model is limited by its training dataset. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on on JW300 corpus and [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt) dataset
## Training procedure
This model was trained on NVIDIA V100 GPU
## Eval results on Test set (BLEU score)
Fine-tuning byt5-base achieves 14.05 BLEU on [Menyo-20k test set](https://arxiv.org/abs/2103.08647) while mt5-base achieves 15.57
### BibTeX entry and citation info
By David Adelani
```
```
|
Declan/FoxNews_model_v1 | 5273cf55d76ccd789f7886abecc85df5d335cecb | 2021-12-12T23:21:35.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v1 | 4 | null | transformers | 17,757 | Entry not found |
Declan/FoxNews_model_v3 | 1a4c1ce9d75287f4a6ea5aad7f09d0299882f0c0 | 2021-12-15T14:38:20.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/FoxNews_model_v3 | 4 | null | transformers | 17,758 | Entry not found |
Declan/HuffPost_model_v8 | 77bd3d5477598ea51076236207f3cd3ccce1a168 | 2021-12-19T23:17:15.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/HuffPost_model_v8 | 4 | null | transformers | 17,759 | Entry not found |
Declan/NPR_model_v8 | f3048011dc7ead354c31cddbd7d82aca8d40a95d | 2021-12-19T23:45:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NPR_model_v8 | 4 | null | transformers | 17,760 | Entry not found |
Declan/NewYorkTimes_model_v8 | f0b738266b37585391513ac4328a968ca7b955b3 | 2021-12-20T00:14:43.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/NewYorkTimes_model_v8 | 4 | null | transformers | 17,761 | Entry not found |
Declan/Politico_model_v1 | a19fd6ee62841caba89e678598d38e1b27e72d8a | 2021-12-14T04:22:06.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v1 | 4 | null | transformers | 17,762 | Entry not found |
Declan/Politico_model_v8 | d0aa550d2442a6ef996b03c57dd149b8c61357e9 | 2021-12-20T00:47:49.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/Politico_model_v8 | 4 | null | transformers | 17,763 | Entry not found |
Declan/WallStreetJournal_model_v8 | 561152ab789aa86138165de55013654dbe127118 | 2021-12-20T03:11:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | Declan | null | Declan/WallStreetJournal_model_v8 | 4 | null | transformers | 17,764 | Entry not found |
DeltaHub/adapter_t5-3b_qnli | d7b829736f5a6fd39be78ff2b56dd140b236a91e | 2022-02-12T08:53:17.000Z | [
"pytorch",
"transformers"
] | null | false | DeltaHub | null | DeltaHub/adapter_t5-3b_qnli | 4 | null | transformers | 17,765 | Entry not found |
Doogie/Waynehills-STT-doogie | 8f44499387b42a632623190e41316f37e858fd78 | 2021-12-16T01:25:27.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Doogie | null | Doogie/Waynehills-STT-doogie | 4 | null | transformers | 17,766 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
name: Waynehills-STT-doogie
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Waynehills-STT-doogie
This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie](https://huggingface.co/Doogie/Waynehills-STT-doogie) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.8180
- eval_wer: 0.9103
- eval_runtime: 25.2323
- eval_samples_per_second: 5.747
- eval_steps_per_second: 0.753
- epoch: 8.45
- step: 14000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 | 2ce8555bcea6571146f9d34a438cad41ab516cc6 | 2022-03-24T11:54:47.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"or",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 | 4 | null | transformers | 17,767 | ---
language:
- or
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- or
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xls-r-300m-or-d5
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: or
metrics:
- name: Test WER
type: wer
value: 0.579136690647482
- name: Test CER
type: cer
value: 0.1572148018392818
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: or
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-or-d5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - OR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9571
- Wer: 0.5450
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset mozilla-foundation/common_voice_8_0 --config or --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-or-d5 --dataset speech-recognition-community-v2/dev_data --config or --split validation --chunk_length_s 10 --stride_length_s 1
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.2958 | 12.5 | 300 | 4.9014 | 1.0 |
| 3.4065 | 25.0 | 600 | 3.5150 | 1.0 |
| 1.5402 | 37.5 | 900 | 0.8356 | 0.7249 |
| 0.6049 | 50.0 | 1200 | 0.7754 | 0.6349 |
| 0.4074 | 62.5 | 1500 | 0.7994 | 0.6217 |
| 0.3097 | 75.0 | 1800 | 0.8815 | 0.5985 |
| 0.2593 | 87.5 | 2100 | 0.8532 | 0.5754 |
| 0.2097 | 100.0 | 2400 | 0.9077 | 0.5648 |
| 0.1784 | 112.5 | 2700 | 0.9047 | 0.5668 |
| 0.1567 | 125.0 | 3000 | 0.9019 | 0.5728 |
| 0.1315 | 137.5 | 3300 | 0.9295 | 0.5827 |
| 0.1125 | 150.0 | 3600 | 0.9256 | 0.5681 |
| 0.1035 | 162.5 | 3900 | 0.9148 | 0.5496 |
| 0.0901 | 175.0 | 4200 | 0.9480 | 0.5483 |
| 0.0817 | 187.5 | 4500 | 0.9799 | 0.5516 |
| 0.079 | 200.0 | 4800 | 0.9571 | 0.5450 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
EMBEDDIA/rubert-tweetsentiment | 0e20d4bc2a31b40c273e1ebb94861fc75bd23603 | 2021-07-09T14:36:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | EMBEDDIA | null | EMBEDDIA/rubert-tweetsentiment | 4 | null | transformers | 17,768 | Entry not found |
Ebtihal/AraBertMo_base_V3 | f4097315915711e4c3d6611c312fc073224a17df | 2022-03-15T19:13:38.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"dataset:OSCAR",
"transformers",
"Fill-Mask",
"autotrain_compatible"
] | fill-mask | false | Ebtihal | null | Ebtihal/AraBertMo_base_V3 | 4 | null | transformers | 17,769 | ---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architechture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
`AraBertMo_base_V3` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
this model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 3 | 64 | 1410 | 3h 10m 31s | 8.0201 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V3")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V3")
```
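As a quick sanity check, a minimal fill-mask sketch is shown below; the `pipeline` call is standard `transformers`, and the example sentence is the first widget example above.

```python
from transformers import pipeline

# Fill the [MASK] token in the first widget example above.
fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V3")
for prediction in fill_mask("السلام عليكم ورحمة[MASK] وبركاتة"):
    print(prediction["token_str"], round(prediction["score"], 4))
```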
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
Ebtihal/AraBertMo_base_V7 | 99fc7ed855c80bc13aa4ea198209423057ab86ef | 2022-03-16T13:27:40.000Z | [
"pytorch",
"bert",
"fill-mask",
"ar",
"dataset:OSCAR",
"transformers",
"Fill-Mask",
"autotrain_compatible"
] | fill-mask | false | Ebtihal | null | Ebtihal/AraBertMo_base_V7 | 4 | null | transformers | 17,770 | Arabic Model AraBertMo_base_V7
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch format.
## Pretraining Corpus
The `AraBertMo_base_V7` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 50046| 7 | 64 | 5915 | 5h 23m 5s | 7.1381 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V7")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V7")
```
## This model was built for master's degree research in an organization:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
EhsanAghazadeh/bert-based-uncased-sst2-e1 | e41395c69393fdcd5fe965b91c83f73d65f3c77e | 2022-01-02T08:30:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/bert-based-uncased-sst2-e1 | 4 | null | transformers | 17,771 | Entry not found |
EhsanAghazadeh/electra-base-avg-2e-5-lcc | 3c47de8e915fba530962570593a624af6334e518 | 2021-08-13T20:20:16.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/electra-base-avg-2e-5-lcc | 4 | null | transformers | 17,772 | Entry not found |
EhsanAghazadeh/electra-large-lcc-2e-5-42 | 77ff9bf3decfbdc0ff101ef5391eb868546a8210 | 2021-08-26T13:22:11.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/electra-large-lcc-2e-5-42 | 4 | null | transformers | 17,773 | Entry not found |
EhsanAghazadeh/xlm-roberta-base-lcc-fa-2e-5-42 | 5b352b6acba1e50da9ee9b31beb3e2246750d426 | 2021-08-21T18:37:49.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | EhsanAghazadeh | null | EhsanAghazadeh/xlm-roberta-base-lcc-fa-2e-5-42 | 4 | null | transformers | 17,774 | Entry not found |
EhsanAghazadeh/xlm-roberta-base-random-weights | 0cc2118b5fc3d38734baf66dbc3909dd2329e925 | 2021-08-28T21:27:29.000Z | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | EhsanAghazadeh | null | EhsanAghazadeh/xlm-roberta-base-random-weights | 4 | null | transformers | 17,775 | Entry not found |
Einmalumdiewelt/T5-Base_GNAD | af114658c2aa1907baec17768eb2156f682709f5 | 2022-06-11T06:22:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"de",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Einmalumdiewelt | null | Einmalumdiewelt/T5-Base_GNAD | 4 | null | transformers | 17,776 | ---
language:
- de
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5-Base_GNAD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5-Base_GNAD
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2144
- Rouge1: 26.012
- Rouge2: 7.0961
- Rougel: 18.1094
- Rougelsum: 22.507
- Gen Len: 55.018
## Model description
More information needed
## Intended uses & limitations
More information needed
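As a rough illustration of how the model could be called, a minimal summarization sketch follows; the pipeline usage is standard `transformers`, while the input text and generation settings are assumptions, and a `summarize:` prefix may be needed depending on how the model was fine-tuned.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Einmalumdiewelt/T5-Base_GNAD")

# Illustrative German snippet; depending on the fine-tuning setup,
# prepending "summarize: " to the text may be required.
article = (
    "Die Stadt Wien hat am Montag ein neues Programm zur Förderung des "
    "öffentlichen Verkehrs vorgestellt. Ziel ist es, den Anteil der "
    "Autofahrten bis 2030 deutlich zu senken."
)
print(summarizer(article, max_length=56, min_length=10)[0]["summary_text"])
```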
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Emily/fyp | 26241a2ffd2db16b9a9be6f1ff287c0101f96b16 | 2022-01-22T06:02:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Emily | null | Emily/fyp | 4 | null | transformers | 17,777 | Entry not found |
Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi | e11654f112c78f9bb0f9bd148e9d8e69347eccbc | 2021-11-16T00:32:57.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | Eugenia | null | Eugenia/roberta-base-bne-finetuned-amazon_reviews_multi | 4 | null | transformers | 17,778 | Entry not found |
GKLMIP/bert-laos-base-uncased | cd5c26f0e2221a2326bda48d8781595723d6f443 | 2021-07-31T06:12:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-laos-base-uncased | 4 | null | transformers | 17,779 | The usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos. |
GKLMIP/bert-laos-small-uncased | 6fb5cbc922c759734ef1a7c1cf0d0e12cdf0a338 | 2021-07-31T06:18:30.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/bert-laos-small-uncased | 4 | null | transformers | 17,780 | The usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos.
|
GKLMIP/electra-khmer-small-uncased-tokenized | 3ac6038c42cfe313eec893aec4b204369d4b014c | 2021-07-31T05:42:53.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/electra-khmer-small-uncased-tokenized | 4 | null | transformers | 17,781 | Entry not found |
GKLMIP/electra-laos-small-uncased | ff5c82ec50fecb177ab7a3c88e07b6da61eaa0c3 | 2021-07-31T06:36:30.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | GKLMIP | null | GKLMIP/electra-laos-small-uncased | 4 | null | transformers | 17,782 | The usage of the tokenizer for Lao is described at https://github.com/GKLMIP/Pretrained-Models-For-Laos. |
GKLMIP/electra-myanmar-small-uncased | d579f2976f5e57340061c972f8e8b3d927b22803 | 2021-10-11T04:58:25.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | GKLMIP | null | GKLMIP/electra-myanmar-small-uncased | 4 | null | transformers | 17,783 | The usage of the tokenizer for Myanmar is the same as for Lao; see https://github.com/GKLMIP/Pretrained-Models-For-Laos.
If you use our model, please consider citing our paper:
```
@InProceedings{,
author="Jiang, Shengyi
and Huang, Xiuwen
and Cai, Xiaonan
and Lin, Nankai",
title="Pre-trained Models and Evaluation Data for the Myanmar Language",
booktitle="The 28th International Conference on Neural Information Processing",
year="2021",
publisher="Springer International Publishing",
address="Cham",
}
``` |
GPL/scifact-tsdae-msmarco-distilbert-margin-mse | bdaad2db323cc68616f573a5d5df877e03ddd76d | 2022-04-19T16:48:19.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | GPL | null | GPL/scifact-tsdae-msmarco-distilbert-margin-mse | 4 | null | transformers | 17,784 | Entry not found |
Geotrend/bert-base-25lang-cased | b10b5bc87455fc3df80fe4ac2b0af7fb74f91d28 | 2021-05-18T18:46:59.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-25lang-cased | 4 | 1 | transformers | 17,785 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-25lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur, sw, nl, uk, ro, pt, it, lt, no, pl, da and ja.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-25lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-25lang-cased")
```
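As a small follow-up (this usage sketch is an addition to the card), the loaded model can be used to obtain token or sentence representations; the input sentence below is one of the widget examples.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-25lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-25lang-cased")

# Encode a French widget example and keep the [CLS] token representation.
inputs = tokenizer("Paris est la capitale de la France.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
print(cls_embedding.shape)
```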
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-bg-cased | 7a81f736caeca4731f48144c9126c9693e8f0fff | 2021-05-18T19:02:31.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-bg-cased | 4 | null | transformers | 17,786 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/bert-base-en-es-cased | 11c4b8b5e0fc596369270376311fcebc29b2caae | 2021-05-18T19:08:56.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-es-cased | 4 | null | transformers | 17,787 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/bert-base-en-fr-es-cased | f0e99c720bcb934c0c6a12f6d8a7993a699eeaf5 | 2021-05-18T19:21:01.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-es-cased | 4 | null | transformers | 17,788 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-fr-it-cased | 05cd253dbc0d8571edc7719f7dcd645e2be47657 | 2021-05-18T19:24:24.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-it-cased | 4 | null | transformers | 17,789 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-it-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-fr-nl-ru-ar-cased | 30be137f7e2234dce4fbacfbc0d5defbd5c7c82b | 2021-05-18T19:26:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-nl-ru-ar-cased | 4 | null | transformers | 17,790 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-nl-ru-ar-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-nl-ru-ar-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-fr-zh-ja-vi-cased | e3a3a45e0cd8d7741dc8e49c9eaeaf4f27370c01 | 2021-05-18T19:30:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-fr-zh-ja-vi-cased | 4 | null | transformers | 17,791 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-fr-zh-ja-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-zh-ja-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-pt-cased | 836c3e3a885e3b9a9992dfba8770c0ccab81559c | 2021-05-18T19:42:48.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-pt-cased | 4 | null | transformers | 17,792 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# bert-base-en-pt-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-pt-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-pt-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/bert-base-en-ur-cased | 8d472fb7a9d1406dde861b54c0935f946fd32deb | 2021-05-18T19:50:18.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-ur-cased | 4 | null | transformers | 17,793 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-ur-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-ur-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-ur-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/bert-base-en-vi-cased | 8b601f2b2c5eb89357a6de223692e1ad0ec3eff1 | 2021-05-18T19:51:36.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-en-vi-cased | 4 | null | transformers | 17,794 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
---
# bert-base-en-vi-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-vi-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-vi-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Geotrend/distilbert-base-en-de-cased | ca6b2e351dc15b3dd38c9a70ccb2b01daaacff96 | 2021-08-16T13:55:29.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-de-cased | 4 | null | transformers | 17,795 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-de-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-de-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-de-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-es-it-cased | e813e566d4180a2a98cfe4473a8933b55c394a51 | 2021-07-27T20:18:12.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-es-it-cased | 4 | null | transformers | 17,796 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-es-it-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-it-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-fr-it-cased | fae009835bf1996017bac8ad61ccbe10f5b9ca17 | 2021-07-27T22:24:51.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-fr-it-cased | 4 | null | transformers | 17,797 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-fr-it-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-fr-it-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-fr-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-it-cased | 67670503458f536da6e2c0a143f690704f2c2042 | 2021-07-27T21:32:50.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-it-cased | 4 | null | transformers | 17,798 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-it-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-it-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-it-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |
Geotrend/distilbert-base-en-ja-cased | 0f9e21a7a2eae9b99ba5e8563a474c398b4a5b2a | 2021-07-27T13:31:25.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/distilbert-base-en-ja-cased | 4 | null | transformers | 17,799 | ---
language: multilingual
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-ja-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-ja-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-ja-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request. |