modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
moumeneb1/testing | 6cbd9df758093c988a3d7c41fba96790b081b3d0 | 2022-01-11T09:16:45.000Z | [
"wav2vec2",
"feature-extraction",
"rw",
"dataset:commonvoice",
"arxiv:2106.04624",
"speechbrain",
"CTC",
"Attention",
"pytorch",
"Transformer",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | moumeneb1 | null | moumeneb1/testing | 1 | null | speechbrain | 30,000 | ---
language: "rw"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- speechbrain
- Transformer
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Test WER | GPUs |
|:--------------:|:--------------:| :--------:|
| 03-06-21 | 18.91 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units, trained on the training transcriptions (train.tsv) of CommonVoice (RW).
- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and finetuned on CommonVoice Kinyarwanda (RW).
The obtained final acoustic representation is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Kinyarwanda)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
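For example, a minimal sketch reusing the loading call from above:
```python
from speechbrain.pretrained import EncoderDecoderASR

# Same model as above, but loaded on the GPU via run_opts
asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-commonvoice-rw",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-rw",
    run_opts={"device": "cuda"},
)
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```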
## Parallel Inference on a Batch
Please [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe a batch of input sentences in parallel using a pre-trained model.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
mpoyraz/wav2vec2-xls-r-300m-cv7-turkish | 708639f50559d7970f462e13ec64d3f059ca89f6 | 2022-03-23T18:28:32.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"license:cc-by-4.0",
"model-index"
] | automatic-speech-recognition | false | mpoyraz | null | mpoyraz/wav2vec2-xls-r-300m-cv7-turkish | 1 | null | transformers | 30,001 | ---
license: cc-by-4.0
language: tr
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
- tr
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: mpoyraz/wav2vec2-xls-r-300m-cv7-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: tr
metrics:
- name: Test WER
type: wer
value: 8.62
- name: Test CER
type: cer
value: 2.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: tr
metrics:
- name: Test WER
type: wer
value: 30.87
- name: Test CER
type: cer
value: 10.69
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: tr
metrics:
- name: Test WER
type: wer
value: 32.09
---
# wav2vec2-xls-r-300m-cv7-turkish
## Model description
This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Turkish language.
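A minimal usage sketch with the `transformers` ASR pipeline (plain CTC decoding, without the n-gram language model described below; `sample.wav` is a placeholder for a Turkish speech recording):
```python
from transformers import pipeline

# Greedy CTC decoding only; the external KenLM n-gram LM is not applied here.
asr = pipeline("automatic-speech-recognition", model="mpoyraz/wav2vec2-xls-r-300m-cv7-turkish")
print(asr("sample.wav")["text"])
```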
## Training and evaluation data
The following datasets were used for finetuning:
- [Common Voice 7.0 TR](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0): the entire `validated` split, excluding the `test` split, was used for training.
- [MediaSpeech](https://www.openslr.org/108/)
## Training procedure
To support both of the datasets above, custom pre-processing and loading steps were performed, and the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo was used for that purpose.
### Training hyperparameters
The following hyperparameters were used for finetuning:
- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.05
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.05
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
## Language Model
An n-gram language model was trained on Turkish Wikipedia articles using KenLM, and the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.
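The exact scripts live in the linked repo; a rough sketch of the usual KenLM workflow (file names here are hypothetical) looks like this:
```bash
# Train a 5-gram ARPA LM on normalized Wikipedia text, then convert it to KenLM binary format
lmplz -o 5 < wiki_tr_normalized.txt > wiki_tr_5gram.arpa
build_binary wiki_tr_5gram.arpa wiki_tr_5gram.bin
```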
## Evaluation Commands
Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation; it is used for Turkish text processing.
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset mozilla-foundation/common_voice_7_0 --config tr --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv7-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Evaluation results:
| Dataset | WER | CER |
|---|---|---|
|Common Voice 7 TR test split| 8.62 | 2.26 |
|Speech Recognition Community dev data| 30.87 | 10.69 |
|
mrm8488/GuaPeTe-2-tiny-finetuned-eubookshop | 6b7e11344f86ea4a4cfd47f966d4c23b2ce70892 | 2021-05-23T10:15:52.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"es",
"transformers",
"spanish",
"gpt-2"
] | text-generation | false | mrm8488 | null | mrm8488/GuaPeTe-2-tiny-finetuned-eubookshop | 1 | null | transformers | 30,002 |
---
language: es
tags:
- spanish
- gpt-2
widget:
- text: "El objetivo de la Unión Europea es"
---
# GuaPeTe-2-tiny fine-tuned on eubookshop dataset for CLM |
mrm8488/GuaPeTe-2-tiny | 1d9fd2d951421a4678cd68bc092635269464d1c0 | 2021-05-23T10:17:59.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"es",
"transformers",
"spanish",
"gpt-2",
"spanish gpt2"
] | text-generation | false | mrm8488 | null | mrm8488/GuaPeTe-2-tiny | 1 | null | transformers | 30,003 | ---
language: es
tags:
- spanish
- gpt-2
- spanish gpt2
widget:
- text: "Murcia es la huerta de Europa porque"
---
# GuaPeTe-2-tiny: A proof of concept tiny GPT-2 like model trained on Spanish Wikipedia corpus
|
mrm8488/RuPERTa-base-finetuned-squadv2 | 8041b75737a070b9384f36417fdf88a5832ecd1b | 2021-05-20T18:14:42.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"es",
"dataset:squad_v2",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/RuPERTa-base-finetuned-squadv2 | 1 | null | transformers | 30,004 | ---
language: es
datasets:
- squad_v2
---
|
mrm8488/byt5-small-finetuned-tweet-qa | ac4b0d1c8e1494253179c0103247aa1f251c9d4f | 2021-06-23T12:37:23.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/byt5-small-finetuned-tweet-qa | 1 | null | transformers | 30,005 | Entry not found |
mrm8488/codebert2codebert-finetuned-code-refinement | 189002e253e1443b672286480129a44de9e6cbe0 | 2021-06-11T10:30:26.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/codebert2codebert-finetuned-code-refinement | 1 | null | transformers | 30,006 | Entry not found |
mrm8488/distilroberta-finetuned-squadv1 | c0846fed86ce606538e41bcfc7ff9e9175062519 | 2021-05-20T18:24:24.000Z | [
"pytorch",
"jax",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/distilroberta-finetuned-squadv1 | 1 | null | transformers | 30,007 | Entry not found |
mrm8488/electra-small-finetuned-squadv1 | ca872f41563e92907289f224a08d9a0b8cc46567 | 2020-12-11T21:53:59.000Z | [
"pytorch",
"electra",
"question-answering",
"en",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/electra-small-finetuned-squadv1 | 1 | null | transformers | 30,008 | ---
language: en
---
# Electra small ⚡ + SQuAD v1 ❓
[Electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) fine-tuned on [SQUAD v1.1 dataset](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task.
## Details of the downstream task (Q&A) - Model 🧠
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
## Details of the downstream task (Q&A) - Dataset 📚
**S**tanford **Q**uestion **A**nswering **D**ataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD v1.1 contains **100,000+** question-answer pairs on **500+** articles.
## Model training 🏋️
The model was trained on a Tesla P100 GPU and 25GB of RAM with the following command:
```bash
python transformers/examples/question-answering/run_squad.py \
--model_type electra \
--model_name_or_path 'google/electra-small-discriminator' \
--do_eval \
--do_train \
--do_lower_case \
--train_file '/content/dataset/train-v1.1.json' \
--predict_file '/content/dataset/dev-v1.1.json' \
--per_gpu_train_batch_size 16 \
--learning_rate 3e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir '/content/output' \
--overwrite_output_dir \
--save_steps 1000
```
## Test set Results 🧾
| Metric | # Value |
| ------ | --------- |
| **EM** | **77.70** |
| **F1** | **85.74** |
| **Size**| **50 MB** |
Very good metrics for such a "small" model!
```json
{
'exact': 77.70104068117313,
'f1': 85.73991234187997,
'total': 10570,
'HasAns_exact': 77.70104068117313,
'HasAns_f1': 85.73991234187997,
'HasAns_total': 10570,
'best_exact': 77.70104068117313,
'best_exact_thresh': 0.0,
'best_f1': 85.73991234187997,
'best_f1_thresh': 0.0
}
```
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
QnA_pipeline = pipeline('question-answering', model='mrm8488/electra-small-finetuned-squadv1')
QnA_pipeline({
'context': 'A new strain of flu that has the potential to become a pandemic has been identified in China by scientists.',
'question': 'What has been discovered by scientists from China ?'
})
# Output:
{'answer': 'A new strain of flu', 'end': 19, 'score': 0.7950334108113424, 'start': 0}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/electrovid19-small | 75f963c638ccbebcef60eb92b1caa7860052cff5 | 2020-06-01T07:50:12.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | mrm8488 | null | mrm8488/electrovid19-small | 1 | null | transformers | 30,009 | Entry not found |
mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa | fb35737463863cf3b1d81122af062bfca37e5437 | 2020-06-13T10:57:01.000Z | [
"pytorch",
"masked_bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/prunebert-multi-uncased-finepruned-l0-reg-tydiqa-for-xqa | 1 | null | transformers | 30,010 | Entry not found |
mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa | 54b5eda03a9eebd16ccc2387f8930c64616653b1 | 2020-06-15T12:20:19.000Z | [
"pytorch",
"masked_bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/prunebert-multi-uncased-finepruned-topK-tydiqa-for-xqa | 1 | null | transformers | 30,011 | Entry not found |
mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa | 6d888587a13f784c8e1fa5d33a2bc5cf9d85f8e0 | 2020-06-02T12:12:53.000Z | [
"pytorch",
"masked_bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mrm8488 | null | mrm8488/prunebert-multi-uncased-finepruned-tydiqa-for-xqa | 1 | null | transformers | 30,012 | Entry not found |
mrm8488/t5-base-finetuned-quoref | c3aa691b2446e2f20b9b178370fbc297db1d9030 | 2020-11-04T19:59:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-quoref | 1 | null | transformers | 30,013 | Entry not found |
mrm8488/t5-base-finetuned-race | 4b90957d233e72f2d3dcb1f6f8bcaacbd62cea63 | 2020-11-07T02:18:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-finetuned-race | 1 | null | transformers | 30,014 | Entry not found |
mrm8488/t5-small-finetuned-squadv1 | d83792bec03360d76a31099c6dbf5fdb91ae6b64 | 2020-12-11T21:56:34.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:squad",
"arxiv:1910.10683",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-squadv1 | 1 | null | transformers | 30,015 | ---
language: en
datasets:
- squad
---
# T5-small fine-tuned on SQuAD
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [(small)](https://huggingface.co/t5-small) fine-tuned on [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
Dataset ID: ```squad``` from [Huggingface/NLP](https://github.com/huggingface/nlp)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| squad | train | 87599 |
| squad | valid | 10570 |
How to load it from [nlp](https://github.com/huggingface/nlp)
```python
import nlp

train_dataset = nlp.load_dataset('squad', split=nlp.Split.TRAIN)
valid_dataset = nlp.load_dataset('squad', split=nlp.Split.VALIDATION)
```
Check out more about this dataset and others in [NLP Viewer](https://huggingface.co/nlp/viewer/)
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this awesome one](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) by [Suraj Patil](https://twitter.com/psuraj28)
## Results 📝
| Metric | # Value |
| ------ | --------- |
| **EM** | **76.95** |
| **F1** | **85.71** |
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-squadv1")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-small-finetuned-squadv1")
def get_answer(question, context):
input_text = "question: %s context: %s </s>" % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(input_ids=features['input_ids'],
attention_mask=features['attention_mask'])
return tokenizer.decode(output[0])
context = "Manuel have created RuPERTa-base (a Spanish RoBERTa) with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"
get_answer(question, context)
# output: 'HF-Transformers and Google'
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/t5-small-finetuned-translation-es-to-pt | 04daf6a7619f78e4e3602bb209fba95390211a21 | 2020-08-04T16:39:37.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-translation-es-to-pt | 1 | null | transformers | 30,016 | Entry not found |
mrp/bert-finetuned-squad | 46ad6764ebb3d07ad79aa1dbb626126e856c81d0 | 2022-06-28T05:22:47.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | mrp | null | mrp/bert-finetuned-squad | 1 | null | transformers | 30,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Loss
type: loss
value: 1.073493242263794
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
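A minimal usage sketch with the `transformers` question-answering pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrp/bert-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```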
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mtr0930/i-manual_integrated_tokenizer | bb5bb339c2a0d130d01003847129f8ef2319c5a1 | 2021-10-14T03:54:03.000Z | [
"pytorch",
"electra",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | mtr0930 | null | mtr0930/i-manual_integrated_tokenizer | 1 | null | transformers | 30,018 | Entry not found |
mujerry/bert-base-uncased-finetuned-QnA-v1 | 2fb2ee12d7a47e688a4b1e575619aa1c224b4a2f | 2021-10-26T09:19:02.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | mujerry | null | mujerry/bert-base-uncased-finetuned-QnA-v1 | 1 | null | transformers | 30,019 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-QnA-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-QnA-v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 39 | 3.3668 |
| No log | 2.0 | 78 | 3.2134 |
| No log | 3.0 | 117 | 3.1685 |
| No log | 4.0 | 156 | 3.1042 |
| No log | 5.0 | 195 | 3.1136 |
| No log | 6.0 | 234 | 2.9051 |
| No log | 7.0 | 273 | 2.9077 |
| No log | 8.0 | 312 | 2.9774 |
| No log | 9.0 | 351 | 2.9321 |
| No log | 10.0 | 390 | 2.9501 |
| No log | 11.0 | 429 | 2.8544 |
| No log | 12.0 | 468 | 2.8761 |
| 3.0255 | 13.0 | 507 | 2.8152 |
| 3.0255 | 14.0 | 546 | 2.8046 |
| 3.0255 | 15.0 | 585 | 2.6979 |
| 3.0255 | 16.0 | 624 | 2.6379 |
| 3.0255 | 17.0 | 663 | 2.7091 |
| 3.0255 | 18.0 | 702 | 2.6914 |
| 3.0255 | 19.0 | 741 | 2.7403 |
| 3.0255 | 20.0 | 780 | 2.7479 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mussoguy/han-kogpt | a697d5014fcc406183f0aee4f516e1233822b7e5 | 2021-12-28T13:37:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mussoguy | null | mussoguy/han-kogpt | 1 | null | transformers | 30,020 | Entry not found |
naleraphael/rasr_base_zhtw | 876f9a61f0f8e2c5b524ec98526903c36d14ffaa | 2022-02-01T23:40:02.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | naleraphael | null | naleraphael/rasr_base_zhtw | 1 | null | transformers | 30,021 | Entry not found |
naleraphael/rasr_sample | 9b22df545f73dfd07db15719c7d75e5044f0280c | 2022-02-01T18:18:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sv-SE",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | naleraphael | null | naleraphael/rasr_sample | 1 | null | transformers | 30,022 | ---
language:
- sv-SE
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: rasr_sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rasr_sample
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3147
- Wer: 0.2676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3332 | 1.45 | 500 | 3.3031 | 1.0 |
| 2.9272 | 2.91 | 1000 | 2.9353 | 0.9970 |
| 2.0736 | 4.36 | 1500 | 1.1565 | 0.8714 |
| 1.7339 | 5.81 | 2000 | 0.7156 | 0.6688 |
| 1.5989 | 7.27 | 2500 | 0.5791 | 0.5519 |
| 1.4916 | 8.72 | 3000 | 0.5038 | 0.5169 |
| 1.4562 | 10.17 | 3500 | 0.4861 | 0.4805 |
| 1.3893 | 11.63 | 4000 | 0.4584 | 0.4761 |
| 1.3797 | 13.08 | 4500 | 0.4298 | 0.4686 |
| 1.3508 | 14.53 | 5000 | 0.4138 | 0.3744 |
| 1.3165 | 15.99 | 5500 | 0.4015 | 0.3578 |
| 1.281 | 17.44 | 6000 | 0.3883 | 0.3472 |
| 1.2682 | 18.89 | 6500 | 0.3904 | 0.3434 |
| 1.2477 | 20.35 | 7000 | 0.3726 | 0.3321 |
| 1.2364 | 21.8 | 7500 | 0.3685 | 0.3281 |
| 1.2041 | 23.26 | 8000 | 0.3597 | 0.3194 |
| 1.1901 | 24.71 | 8500 | 0.3542 | 0.3203 |
| 1.1903 | 26.16 | 9000 | 0.3500 | 0.3138 |
| 1.1677 | 27.61 | 9500 | 0.3458 | 0.3067 |
| 1.1718 | 29.07 | 10000 | 0.3595 | 0.3112 |
| 1.1562 | 30.52 | 10500 | 0.3433 | 0.3022 |
| 1.1392 | 31.97 | 11000 | 0.3440 | 0.2936 |
| 1.1258 | 33.43 | 11500 | 0.3396 | 0.2950 |
| 1.1067 | 34.88 | 12000 | 0.3379 | 0.2939 |
| 1.0953 | 36.34 | 12500 | 0.3370 | 0.2868 |
| 1.0835 | 37.79 | 13000 | 0.3317 | 0.2860 |
| 1.0772 | 39.24 | 13500 | 0.3302 | 0.2854 |
| 1.0853 | 40.7 | 14000 | 0.3265 | 0.2783 |
| 1.0689 | 42.15 | 14500 | 0.3306 | 0.2770 |
| 1.0394 | 43.6 | 15000 | 0.3233 | 0.2757 |
| 1.0581 | 45.06 | 15500 | 0.3199 | 0.2713 |
| 1.0362 | 46.51 | 16000 | 0.3154 | 0.2683 |
| 1.0406 | 47.96 | 16500 | 0.3176 | 0.2688 |
| 1.0082 | 49.42 | 17000 | 0.3149 | 0.2679 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
napoler/bart-chinese-4-768 | 000e0eaed5b6692d0b3feb1deee8aa0ac29ae2a6 | 2021-11-08T13:57:29.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | napoler | null | napoler/bart-chinese-4-768 | 1 | null | transformers | 30,023 | Entry not found |
narabzad/passage_reranker_large_bert | b3830411aa535ef491d1c04072ac4b639523c1a7 | 2020-08-16T23:35:58.000Z | [
"pytorch",
"transformers"
] | null | false | narabzad | null | narabzad/passage_reranker_large_bert | 1 | null | transformers | 30,024 | Entry not found |
nateraw/resnext101_32x8d | 6b0545618826c87ae25f3c004dbdeb5f849fa951 | 2021-04-13T10:12:21.000Z | [
"pytorch",
"resnet",
"transformers"
] | null | false | nateraw | null | nateraw/resnext101_32x8d | 1 | null | transformers | 30,025 | Entry not found |
nateraw/timm-resnet50 | c2acdcaf324d2b4766ef3b0d7d6a359882d060ba | 2021-09-01T05:24:59.000Z | [
"pytorch",
"transformers"
] | null | false | nateraw | null | nateraw/timm-resnet50 | 1 | null | transformers | 30,026 | Entry not found |
nates-test-org/cait_m48_448 | dc1644707428a3758321be7fb3747da0c5bdd3df | 2021-10-29T04:04:25.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_m48_448 | 1 | null | timm | 30,027 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_m48_448 |
nates-test-org/cait_xxs24_224 | b8406e79b6dec6b85536fe5c66f8110deb597187 | 2021-10-29T04:32:59.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_xxs24_224 | 1 | null | timm | 30,028 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_xxs24_224 |
nates-test-org/cait_xxs36_384 | 9a94261b58051babc8cb7bb24ecece7095817a3f | 2021-10-29T04:35:50.000Z | [
"pytorch",
"timm",
"image-classification"
] | image-classification | false | nates-test-org | null | nates-test-org/cait_xxs36_384 | 1 | null | timm | 30,029 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for cait_xxs36_384 |
natsuo/ja_rome | e6189c9d79b21da4f64259bfb7db244f0176d9e2 | 2021-07-08T08:14:18.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | natsuo | null | natsuo/ja_rome | 1 | null | transformers | 30,030 | Entry not found |
navid-rekabsaz/advbert_ranker_l2 | d1f8a569e86015143901aba880d8ca190f61c0da | 2021-06-04T17:00:02.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | navid-rekabsaz | null | navid-rekabsaz/advbert_ranker_l2 | 1 | null | transformers | 30,031 | ## Welcome |
ncduy/bert-base-cased-wikitext2 | e9bd1d739bdc09a4a765b44e4388703ff4af5838 | 2021-08-06T15:08:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | fill-mask | false | ncduy | null | ncduy/bert-base-cased-wikitext2 | 1 | null | transformers | 30,032 | ---
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-cased-wikitext2
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0916 | 1.0 | 2346 | 7.0492 |
| 6.9074 | 2.0 | 4692 | 6.8727 |
| 6.8588 | 3.0 | 7038 | 6.8914 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ncduy/distilbert-base-uncased-finetuned-imdb | 1f58040ec3e7220172a19f449f53650e2ae72d0c | 2021-12-06T07:11:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | ncduy | null | ncduy/distilbert-base-uncased-finetuned-imdb | 1 | null | transformers | 30,033 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
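A minimal usage sketch with the `transformers` fill-mask pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# DistilBERT uses [MASK] as its mask token
fill_mask = pipeline("fill-mask", model="ncduy/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was absolutely [MASK]."))
```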
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ncduy/marian-finetuned-kde4-en-to-fr | 626712bbd3baee1acf094c15c1c016308da4c0fa | 2021-12-06T08:46:30.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | ncduy | null | ncduy/marian-finetuned-kde4-en-to-fr | 1 | null | transformers | 30,034 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.8691179414982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.8691
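A minimal usage sketch with the `transformers` translation pipeline (the input string is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="ncduy/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```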
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ncduy/opus-mt-en-ro-finetuned-en-to-ro | 262baea1b5bb423ef77662a96ec8e65b18aa69e8 | 2021-08-06T15:55:10.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
] | text2text-generation | false | ncduy | null | ncduy/opus-mt-en-ro-finetuned-en-to-ro | 1 | null | transformers | 30,035 | ---
tags:
- generated_from_trainer
datasets:
- wmt16
model_index:
- name: opus-mt-en-ro-finetuned-en-to-ro
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ro-en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 382 | 1.4067 | 27.6209 | 33.5648 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
ncduy/opus-mt-en-vi-own-finetuned-en-to-vi | bcaae13a7d65c06909fc4eb7e40828b57c779472 | 2022-01-11T09:21:10.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ncduy | null | ncduy/opus-mt-en-vi-own-finetuned-en-to-vi | 1 | null | transformers | 30,036 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-vi-own-finetuned-en-to-vi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-vi-own-finetuned-en-to-vi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4416
- Bleu: 2.1189
- Gen Len: 25.153
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 6.2513 | 1.0 | 1563 | 6.0147 | 0.7038 | 29.165 |
| 5.7184 | 2.0 | 3126 | 5.5631 | 1.9803 | 23.915 |
| 5.5248 | 3.0 | 4689 | 5.4416 | 2.1189 | 25.153 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ncduy/xlm-roberta-base-squad2-distilled-finetuned-chaii | 0aee8a23665702c0c1b3e47f640c4398764dd833 | 2021-12-09T14:41:35.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | ncduy | null | ncduy/xlm-roberta-base-squad2-distilled-finetuned-chaii | 1 | null | transformers | 30,037 | Entry not found |
ncoop57/codeformer-code-java | f7e58829e2a2781a752eeb0d4f6c62bbde11b2d5 | 2021-06-07T02:35:04.000Z | [
"pytorch",
"transformers"
] | null | false | ncoop57 | null | ncoop57/codeformer-code-java | 1 | null | transformers | 30,038 | Entry not found |
nehamj/distilbert-base-uncased-finetuned-squad | 3f7a3195c8f36ad12b2b52544cd218e5a84f1b95 | 2021-12-26T04:39:21.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | nehamj | null | nehamj/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,039 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nepp1d0/Bert-pretrained-smilesBindingDB | c9dc2af167847af9f3510e018e512186e625248b | 2022-01-11T13:23:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nepp1d0 | null | nepp1d0/Bert-pretrained-smilesBindingDB | 1 | null | transformers | 30,040 | Entry not found |
newsha/PQuAD | b3826130c9ddb743ac7599f6626f1eae4e258a59 | 2022-01-06T19:04:26.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | newsha | null | newsha/PQuAD | 1 | null | transformers | 30,041 | Entry not found |
newsha/PQuAD_2 | 8e7121134d147bdc1d95730c7bc323838206323f | 2022-01-06T14:06:13.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | newsha | null | newsha/PQuAD_2 | 1 | null | transformers | 30,042 | Entry not found |
nfliu/roberta_s2orc_bpe_47k | 5fcad82761b089536b711bcded6b3f303923ba81 | 2021-12-08T22:11:18.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nfliu | null | nfliu/roberta_s2orc_bpe_47k | 1 | null | transformers | 30,043 | Entry not found |
nhrony/bert-final | faa497c7f5bb600542cd4904cec7c3146845b576 | 2022-01-16T19:05:41.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nhrony | null | nhrony/bert-final | 1 | null | transformers | 30,044 | Entry not found |
nickmuchi/kde4-marian-finetuned-en-fr | 4415c4bb47e882dc1258721e8d203fd0ee180854 | 2022-01-08T03:34:47.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"dataset:kde4",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | nickmuchi | null | nickmuchi/kde4-marian-finetuned-en-fr | 1 | null | transformers | 30,045 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: kde4-marian-finetuned-en-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83986563041003
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kde4-marian-finetuned-en-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8555
- Bleu: 52.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
niclas/model_sv_working | 00b3e0a6089db717979e2ee744d46726bcdd76c5 | 2021-12-23T10:35:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | niclas | null | niclas/model_sv_working | 1 | null | transformers | 30,046 | Entry not found |
nielsr/deformable-detr-single-scale-dc5 | 207ff2205bb988280049aec50ada3be80d38f7d2 | 2022-02-01T13:24:48.000Z | [
"pytorch",
"deformable_detr",
"transformers"
] | null | false | nielsr | null | nielsr/deformable-detr-single-scale-dc5 | 1 | null | transformers | 30,047 | Entry not found |
nielsr/dino_vitb8 | 3913fd2db1c2cba8a3fddaa5092e399fc4aa9c59 | 2021-05-03T08:00:43.000Z | [
"pytorch",
"vit",
"feature-extraction",
"transformers"
] | feature-extraction | false | nielsr | null | nielsr/dino_vitb8 | 1 | null | transformers | 30,048 | Entry not found |
nielsr/enformer-preview | 13c7fcfb765f220309331f4d549f823fe61a04bf | 2022-02-23T22:03:51.000Z | [
"pytorch",
"enformer",
"transformers"
] | null | false | nielsr | null | nielsr/enformer-preview | 1 | null | transformers | 30,049 | Entry not found |
nielsr/luke-large | 6c3b1774a38ea41dc3f260d26d6c9f156384613c | 2021-02-18T15:04:30.000Z | [
"pytorch",
"luke",
"transformers"
] | null | false | nielsr | null | nielsr/luke-large | 1 | null | transformers | 30,050 | Entry not found |
nielsr/tapas-base | 1e052baf074d968576839c6b61959d8663c6b87e | 2020-12-11T11:12:17.000Z | [
"pytorch",
"tapas",
"feature-extraction",
"en",
"arxiv:2004.02349",
"arxiv:2010.00571",
"transformers",
"sequence-classification",
"license:apache-2.0"
] | feature-extraction | false | nielsr | null | nielsr/tapas-base | 1 | null | transformers | 30,051 | ---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
---
# TAPAS base model
This model has 2 versions which can be used. The latest version, which is the default one, corresponds to the `tapas_inter_masklm_base_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training. It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is the one with absolute position embeddings:
- `revision="v1"`, which corresponds to `tapas_inter_masklm_base`
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding one or more classification heads on top of the pre-trained model, and then
jointly train these randomly initialized classification heads with the base model on a downstream task.
## Intended uses & limitations
You can use the raw model for getting hidden representations of table-question pairs, but it's mostly intended to be fine-tuned on a downstream task such as question answering or sequence classification. See the [model hub](https://huggingface.co/models?filter=tapas) to look for fine-tuned versions on a task that interests you.
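For example, hidden states for a table-question pair can be obtained roughly as follows (a minimal sketch; the table content is illustrative and the tokenizer files are assumed to be available in this repo):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasModel

tokenizer = TapasTokenizer.from_pretrained("nielsr/tapas-base")
# pass revision="v1" to load the absolute-position-embedding checkpoint instead
model = TapasModel.from_pretrained("nielsr/tapas-base")

# TAPAS expects all table cells as strings
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["59", "48"]})
inputs = tokenizer(table=table, queries=["How old is Brad Pitt?"], return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```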
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence [SEP] Flattened table [SEP]
```
### Pre-training
The model was pre-trained on 32 Cloud TPU v3 cores for 1,000,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, pre-training on MLM only takes around 3 days. Additionally, the model has been further pre-trained on a second task (table entailment). See the original TAPAS [paper](https://www.aclweb.org/anthology/2020.acl-main.398/) and the [follow-up paper](https://www.aclweb.org/anthology/2020.findings-emnlp.27/) for more details.
The optimizer used is Adam with a learning rate of 5e-5, and a warmup
ratio of 0.01.
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
nielsr/tapex-large-finetuned-sqa | f3415a4d06011a0c17bdf859dbf7f43c841cec17 | 2022-01-13T14:41:16.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:msr_sqa",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:apache-2.0",
"autotrain_compatible"
] | table-question-answering | false | nielsr | null | nielsr/tapex-large-finetuned-sqa | 1 | null | transformers | 30,052 | ---
language: en
tags:
- tapex
- table-question-answering
license: apache-2.0
datasets:
- msr_sqa
inference: false
---
TAPEX-large model fine-tuned on SQA. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
To load it and run inference, you can do the following:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-sqa")
model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-sqa")
# create table
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
table = pd.DataFrame.from_dict(data)
# turn into dict
table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]}
# turn into format TAPEX expects
# define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py
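# NOTE: IndexedRowTableLinearize is defined in the Microsoft TAPEX repo linked above;
# the minimal stand-in below only approximates its "col : ... row 1 : ..." output format
# so that this snippet runs end-to-end. Prefer the original implementation.
class IndexedRowTableLinearize:
    def process_table(self, table_content):
        header = "col : " + " | ".join(table_content["header"])
        rows = [
            f"row {i + 1} : " + " | ".join(str(v) for v in row)
            for i, row in enumerate(table_content["rows"])
        ]
        return " ".join([header] + rows)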
linearizer = IndexedRowTableLinearize()
linear_table = linearizer.process_table(table_dict)
# add question
question = "how many movies does George Clooney have?"
joint_input = question + " " + linear_table
# encode
encoding = tokenizer(joint_input, return_tensors="pt")
# forward pass
outputs = model.generate(**encoding)
# decode
tokenizer.batch_decode(outputs, skip_special_tokens=True)
``` |
nikhil6041/wav2vec2-large-xlsr-hindi-demo-colab | 96a53effc9af4522191408fa34d7f2a2a200feba | 2021-11-04T09:21:14.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-large-xlsr-hindi-demo-colab | 1 | null | transformers | 30,053 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
nikhil6041/wav2vec2-large-xlsr-tamil-commonvoice | b692d6d680b623d8a2efa828f1acd110412d1251 | 2021-11-07T11:46:12.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nikhil6041 | null | nikhil6041/wav2vec2-large-xlsr-tamil-commonvoice | 1 | null | transformers | 30,054 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-tamil-commonvoice
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-tamil-commonvoice
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6145
- Wer: 0.8512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.0478 | 1.05 | 100 | 3.3867 | 1.0 |
| 3.2522 | 2.11 | 200 | 3.2770 | 1.0 |
| 3.1689 | 3.16 | 300 | 3.1135 | 1.0039 |
| 2.9278 | 4.21 | 400 | 2.0485 | 1.3109 |
| 1.3592 | 5.26 | 500 | 0.8044 | 1.0988 |
| 0.7472 | 6.32 | 600 | 0.6571 | 0.9474 |
| 0.5842 | 7.37 | 700 | 0.6079 | 0.9477 |
| 0.4831 | 8.42 | 800 | 0.6083 | 0.9491 |
| 0.4259 | 9.47 | 900 | 0.5916 | 0.8973 |
| 0.3817 | 10.53 | 1000 | 0.6070 | 0.9147 |
| 0.338 | 11.58 | 1100 | 0.5873 | 0.8617 |
| 0.3123 | 12.63 | 1200 | 0.5983 | 0.8844 |
| 0.287 | 13.68 | 1300 | 0.6146 | 0.8988 |
| 0.2706 | 14.74 | 1400 | 0.6068 | 0.8754 |
| 0.2505 | 15.79 | 1500 | 0.5996 | 0.8638 |
| 0.2412 | 16.84 | 1600 | 0.6106 | 0.8481 |
| 0.2176 | 17.89 | 1700 | 0.6152 | 0.8520 |
| 0.2255 | 18.95 | 1800 | 0.6150 | 0.8540 |
| 0.216 | 20.0 | 1900 | 0.6145 | 0.8512 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
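For inference, the checkpoint can be loaded like any other fine-tuned XLSR model. A minimal sketch (the `sample.wav` path is a placeholder, the audio is assumed to be mono, and the processor/tokenizer files are assumed to be included in this repository):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "nikhil6041/wav2vec2-large-xlsr-tamil-commonvoice"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a local recording (placeholder path) and resample to the 16 kHz rate the model expects
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```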
|
nikhilnagaraj/german_gpt_small | e005a2ee4acea84aeb4ecee507b2db88b1998eaa | 2021-05-23T10:49:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nikhilnagaraj | null | nikhilnagaraj/german_gpt_small | 1 | null | transformers | 30,055 | Entry not found |
nikitam/mbert-resp-en-de | 2221017bec6937a49fa07234c60a1e1cfdd4329f | 2021-10-25T20:28:17.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-resp-en-de | 1 | null | transformers | 30,056 | Entry not found |
nikitam/mbert-resp-en-zh | c5d89c2ffb6e79733f795f8b4040e77e7e34e50d | 2021-10-25T20:06:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-resp-en-zh | 1 | null | transformers | 30,057 | Entry not found |
nikitam/mbert-tlm-chat-en-de | ccec8d2117636bb72917c23c219731f90ec5d6ba | 2021-10-25T20:32:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-chat-en-de | 1 | null | transformers | 30,058 | Entry not found |
nikitam/mbert-tlm-chat-en-it | 572813d999ca2ceb4cb2c5332499d8a4fd4a34c8 | 2021-10-25T20:51:47.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-chat-en-it | 1 | null | transformers | 30,059 | Entry not found |
nikitam/mbert-tlm-sent-en-de | 95001bae2b795f5406bfc21ec7e612d5105e2e75 | 2021-11-13T15:14:26.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-sent-en-de | 1 | null | transformers | 30,060 | Entry not found |
nikitam/mbert-tlm-sent-en-it | 95451f272eb8b65677f094bbca616e48d80d94df | 2021-10-25T20:51:54.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-sent-en-it | 1 | null | transformers | 30,061 | Entry not found |
nikitam/mbert-tlm-sent-en-zh | 1a07c3514258ed13deaf90b67f675621c94e6b0b | 2021-10-25T20:12:32.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-tlm-sent-en-zh | 1 | null | transformers | 30,062 | Entry not found |
nikitam/mbert-xdm-en-de | 782b02ea36645ef3443751afd633cd7ee34d77f2 | 2021-10-25T20:12:19.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nikitam | null | nikitam/mbert-xdm-en-de | 1 | null | transformers | 30,063 | Entry not found |
nithinholla/wav2vec2-large-xlsr-53-dutch | ab884eedad4f5a4dae89f0c32a36112c3641be6b | 2021-03-28T10:48:00.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nithinholla | null | nithinholla/wav2vec2-large-xlsr-53-dutch | 1 | null | transformers | 30,064 | ---
language: nl
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Dutch XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice nl
type: common_voice
args: nl
metrics:
- name: Test WER
type: wer
value: 21.59
---
# Wav2Vec2-Large-XLSR-53-Dutch
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "nl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Dutch test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "nl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch")
model = Wav2Vec2ForCTC.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\'\�\(\)\&\–\—\=\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("´", "'").replace("’", "'")
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run batched inference over the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 21.59 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/Nithin-Holla/wav2vec2-sprint/blob/main/train_nl.sh). |
nkul/dbert-rda | 34d7525e9d12c72296bf227092a1ca3c16935427 | 2021-12-22T21:29:29.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | nkul | null | nkul/dbert-rda | 1 | null | transformers | 30,065 | Entry not found |
nlokam/Digibot | 7e530e1b9a3bbe5f54ce6a81e481956d4b446e33 | 2021-10-26T21:12:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nlokam | null | nlokam/Digibot | 1 | null | transformers | 30,066 | ---
tags:
- conversational
---
# Digimon DialoGPT Model |
nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2 | 6a11b3a5e6d231ad35e9568b29ac6a6a034553d5 | 2021-12-29T04:54:26.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers",
"generated_from_keras_callback",
"dpr",
"license:apache-2.0",
"model-index"
] | feature-extraction | false | nlpconnect | null | nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2 | 1 | null | transformers | 30,067 | ---
tags:
- generated_from_keras_callback
- dpr
license: apache-2.0
model-index:
- name: dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2
This model (google/bert_uncased_L-2_H-128_A-2) was trained from scratch on the data.retriever.nq-adv-hn-train training data from facebookresearch/DPR. Evaluation results are reported in the tables below.
## Evaluation data
evaluation dataset: facebook-dpr-dev-dataset from official DPR github
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|65.43%|71.99%|79.03%|83.24%|92.11%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-dev dataset|6445|199795|40.94%|49.27%|59.05%|66.00%|82.00%|
evaluation dataset: UKPLab/beir test data, restricted to the first 200k (2 lakh) passages.
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-test dataset|3452|200001|51.62%|61.09%|70.10%|76.07%|88.70%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-test dataset|3452|200001|32.93%|43.74%|56.95%|66.30%|83.92%|
Note: * means we evaluated that model on the same eval dataset.
### Usage (HuggingFace Transformers)
```python
from transformers import TFAutoModel, AutoTokenizer
import numpy as np

passage_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
query_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")
p_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
q_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")

# passage_dicts: user-supplied list of {"title": ..., "text": ...} dicts
def get_title_text_combined(passage_dicts):
    res = []
    for p in passage_dicts:
        res.append(tuple((p['title'], p['text'])))
    return res

processed_passages = get_title_text_combined(passage_dicts)

# model_config: user-supplied object with passage_max_seq_len / query_max_seq_len attributes
def extracted_passage_embeddings(processed_passages, model_config):
    passage_inputs = p_tokenizer.batch_encode_plus(
        processed_passages,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.passage_max_seq_len,
        return_token_type_ids=True
    )
    passage_embeddings = passage_encoder.predict([np.array(passage_inputs['input_ids']),
                                                  np.array(passage_inputs['attention_mask']),
                                                  np.array(passage_inputs['token_type_ids'])],
                                                 batch_size=512,
                                                 verbose=1)
    return passage_embeddings

passage_embeddings = extracted_passage_embeddings(processed_passages, model_config)

# queries: user-supplied list of question strings
def extracted_query_embeddings(queries, model_config):
    query_inputs = q_tokenizer.batch_encode_plus(
        queries,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.query_max_seq_len,
        return_token_type_ids=True
    )
    query_embeddings = query_encoder.predict([np.array(query_inputs['input_ids']),
                                              np.array(query_inputs['attention_mask']),
                                              np.array(query_inputs['token_type_ids'])],
                                             batch_size=512,
                                             verbose=1)
    return query_embeddings

query_embeddings = extracted_query_embeddings(queries, model_config)
```
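Once passage and query embeddings have been extracted, dense retrieval reduces to a maximum-inner-product search. Below is a minimal scoring sketch (an illustration only, assuming `query_embeddings` and `passage_embeddings` from the snippet above are 2-D arrays of pooled vectors, one row per query/passage):

```python
import numpy as np

def top_k_passages(query_embeddings, passage_embeddings, k=10):
    # Inner-product similarity between every query and every passage: shape (num_queries, num_passages)
    scores = np.matmul(query_embeddings, passage_embeddings.T)
    # Indices of the k highest-scoring passages per query, best first
    top_k = np.argsort(-scores, axis=1)[:, :k]
    return top_k, np.take_along_axis(scores, top_k, axis=1)

top_indices, top_scores = top_k_passages(query_embeddings, passage_embeddings, k=10)
```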
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Tokenizers 0.10.3
|
nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2 | c8fc85066f32803c97d271050b315cffbe8990db | 2021-12-29T04:54:34.000Z | [
"pytorch",
"tf",
"bert",
"feature-extraction",
"transformers",
"generated_from_keras_callback",
"dpr",
"license:apache-2.0",
"model-index"
] | feature-extraction | false | nlpconnect | null | nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2 | 1 | null | transformers | 30,068 | ---
tags:
- generated_from_keras_callback
- dpr
license: apache-2.0
model-index:
- name: dpr-question_encoder_bert_uncased_L-2_H-128_A-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpr-question_encoder_bert_uncased_L-2_H-128_A-2
This model (google/bert_uncased_L-2_H-128_A-2) was trained from scratch on the data.retriever.nq-adv-hn-train training data from facebookresearch/DPR. Evaluation results are reported in the tables below.
## Evaluation data
evaluation dataset: facebook-dpr-dev-dataset from official DPR github
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|65.43%|71.99%|79.03%|83.24%|92.11%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-dev dataset|6445|199795|40.94%|49.27%|59.05%|66.00%|82.00%|
evaluation dataset: UKPLab/beir test data, restricted to the first 200k (2 lakh) passages.
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-test dataset|3452|200001|51.62%|61.09%|70.10%|76.07%|88.70%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-test dataset|3452|200001|32.93%|43.74%|56.95%|66.30%|83.92%|
Note: * means we evaluated that model on the same eval dataset.
### Usage (HuggingFace Transformers)
```python
from transformers import TFAutoModel, AutoTokenizer
import numpy as np

passage_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
query_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")
p_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
q_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")

# passage_dicts: user-supplied list of {"title": ..., "text": ...} dicts
def get_title_text_combined(passage_dicts):
    res = []
    for p in passage_dicts:
        res.append(tuple((p['title'], p['text'])))
    return res

processed_passages = get_title_text_combined(passage_dicts)

# model_config: user-supplied object with passage_max_seq_len / query_max_seq_len attributes
def extracted_passage_embeddings(processed_passages, model_config):
    passage_inputs = p_tokenizer.batch_encode_plus(
        processed_passages,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.passage_max_seq_len,
        return_token_type_ids=True
    )
    passage_embeddings = passage_encoder.predict([np.array(passage_inputs['input_ids']),
                                                  np.array(passage_inputs['attention_mask']),
                                                  np.array(passage_inputs['token_type_ids'])],
                                                 batch_size=512,
                                                 verbose=1)
    return passage_embeddings

passage_embeddings = extracted_passage_embeddings(processed_passages, model_config)

# queries: user-supplied list of question strings
def extracted_query_embeddings(queries, model_config):
    query_inputs = q_tokenizer.batch_encode_plus(
        queries,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.query_max_seq_len,
        return_token_type_ids=True
    )
    query_embeddings = query_encoder.predict([np.array(query_inputs['input_ids']),
                                              np.array(query_inputs['attention_mask']),
                                              np.array(query_inputs['token_type_ids'])],
                                             batch_size=512,
                                             verbose=1)
    return query_embeddings

query_embeddings = extracted_query_embeddings(queries, model_config)
```
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Tokenizers 0.10.3 |
nlplab/Verdict_Recognizer_Final | 563dfa45e5f00382d51e5af3f75b544d7ad35e79 | 2021-11-25T06:11:16.000Z | [
"pytorch",
"albert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlplab | null | nlplab/Verdict_Recognizer_Final | 1 | null | transformers | 30,069 | Entry not found |
nlpunibo/bert | f8e21745a1c74c3c301a28dfca07465f9ca24b43 | 2021-05-20T02:00:27.000Z | [
"pytorch",
"jax",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/bert | 1 | null | transformers | 30,070 | Entry not found |
nlpunibo/classifier | 6e6f32f71ec16bd3ce46a3eb2128abc052102ef7 | 2021-03-19T14:24:16.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | nlpunibo | null | nlpunibo/classifier | 1 | null | transformers | 30,071 | Entry not found |
nlpunibo/distilbert_base_config3 | 91dcd04c3d9617e3f797d808546dd29753956dcf | 2021-02-19T14:40:29.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_base_config3 | 1 | null | transformers | 30,072 | Entry not found |
nlpunibo/distilbert_classifier | daae4824f64c9ade883ff53b49ff570b65c29d37 | 2021-02-20T09:01:51.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | nlpunibo | null | nlpunibo/distilbert_classifier | 1 | null | transformers | 30,073 | Entry not found |
nlpunibo/distilbert_config2 | d7c72dbe64008a09d6f7cad1b02abe75111724b5 | 2021-02-19T14:49:49.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_config2 | 1 | null | transformers | 30,074 | Entry not found |
nlpunibo/distilbert_config3 | 4e6a97d7a48aa68a618659f4d3e2123926d1375b | 2021-02-18T08:52:42.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | nlpunibo | null | nlpunibo/distilbert_config3 | 1 | null | transformers | 30,075 | Entry not found |
nntadotzip/bert-base-cased-IUChatbot-ontologyDts | 0bcf7cee970718f598cabd4535088fe203bdef3f | 2022-01-20T16:21:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | nntadotzip | null | nntadotzip/bert-base-cased-IUChatbot-ontologyDts | 1 | null | transformers | 30,076 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-IUChatbot-ontologyDts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-IUChatbot-ontologyDts
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 382 | 0.2686 |
| 0.3946 | 2.0 | 764 | 0.2535 |
| 0.2577 | 3.0 | 1146 | 0.2446 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
noelmathewisaac/inspirational-quotes-distilgpt2 | 89c5559cab2b7644617c3e9e50803bbf215504bc | 2021-06-19T11:01:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | noelmathewisaac | null | noelmathewisaac/inspirational-quotes-distilgpt2 | 1 | 1 | transformers | 30,077 | ## About
`Distilgpt2` model finetuned on a dataset of inspirational/motivational quotes taken from the [Quotes-500K](https://github.com/ShivaliGoel/Quotes-500K) dataset. The model can generate inspirational quotes, many of which sound quite realistic.
## Code for Training
The code for fine-tuning the model can be found in this repo: https://github.com/Quotify-Bot/model-training.
## Training Details
The model was fine-tuned for **50 epochs** on Google Colab's GPU using about **100,000 quotes** from the original dataset.
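Quotes similar to the ones below can be sampled with the standard text-generation pipeline (a minimal sketch; the sampling parameters are illustrative and not necessarily those used to produce the examples):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="noelmathewisaac/inspirational-quotes-distilgpt2")
quote = generator("Friendship is like", max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"]
print(quote)
```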
## Some Interesting Quotes
**Prompt**: Friendship is like
> Friendship is like a flower. when it blooms, it beautifies this world with its fragrance.
**Prompt**: Life is like
> Life is like travelling through time so stop being afraid of taking a chance and start appreciating where you are in life.
**Prompt**: Motivation
> Motivation will drive you to action, which in turn attracts inspiration from beyond.
**Prompt**: In the end
> In the end, it is necessary to discover your inner beauty and truth. |
norie4/DialoGPT-small-memoji | 9d78cb37f4d9481005aa51ee2132aabc5a4fd947 | 2022-02-01T02:58:57.000Z | [
"pytorch",
"conversational"
] | conversational | false | norie4 | null | norie4/DialoGPT-small-memoji | 1 | null | null | 30,078 | ---
tags:
- conversational
---
# memoji DialoGPT Model |
nouamanetazi/cover-letter-distilgpt2 | ec600bb35798a0d5d1453d0aaf2502d3d91df850 | 2021-11-24T01:06:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | nouamanetazi | null | nouamanetazi/cover-letter-distilgpt2 | 1 | 1 | transformers | 30,079 | Entry not found |
nouamanetazi/cover-letter-t5-small | 9a17fd2a2b9e7bea479feff2db3ed15109d4db63 | 2021-11-27T13:21:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | nouamanetazi | null | nouamanetazi/cover-letter-t5-small | 1 | 1 | transformers | 30,080 | Entry not found |
novinsh/xlm-roberta-large-toxicomments-12k | 4c171c95da874aae552e5eb1113165f2399a8c44 | 2020-05-26T15:25:05.000Z | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | novinsh | null | novinsh/xlm-roberta-large-toxicomments-12k | 1 | null | transformers | 30,081 | Entry not found |
nsa-thatchai/test | 84d89729a932d5da98dee8dab12e55a4e52c8a9b | 2021-05-24T12:20:01.000Z | [
"pytorch",
"text-generation",
"transformers"
] | text-generation | false | nsa-thatchai | null | nsa-thatchai/test | 1 | null | transformers | 30,082 | Entry not found |
nthoangcute/vibert-base-cased | 398722e4b4136824444ab003ae4909fb73618452 | 2021-05-20T02:05:12.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
] | null | false | nthoangcute | null | nthoangcute/vibert-base-cased | 1 | null | transformers | 30,083 | Entry not found |
nvshubhsharma/wav2vec2-large-xlsr-hindi-colab | 3517a17739045cee3e85d22bccb4b8acc885fce5 | 2021-11-06T14:48:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | nvshubhsharma | null | nvshubhsharma/wav2vec2-large-xlsr-hindi-colab | 1 | null | transformers | 30,084 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
nwl/DialoGPT-small-enhypen | 6555bc28f28b7cd4da1a59c28690edd69c305ac8 | 2021-12-31T13:38:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | nwl | null | nwl/DialoGPT-small-enhypen | 1 | null | transformers | 30,085 | ---
tags:
- conversational
---
|
oakkas/Dialge-small-harrypotter-oguz | 4e305c2ba954ec15274b41899029f238e6ecaf9b | 2021-08-26T19:20:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | oakkas | null | oakkas/Dialge-small-harrypotter-oguz | 1 | null | transformers | 30,086 | ---
tags:
- conversational
---
# Harry Potter Dialogue GPT Oguz |
obiohagwu/Dialogpt-small-rick | 08c918a7675f97911b7a7dcdd3b7be305ad41110 | 2021-07-12T20:55:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | obiohagwu | null | obiohagwu/Dialogpt-small-rick | 1 | null | transformers | 30,087 | Entry not found |
obiohagwu/Dialogpt-small-rick01 | 7d0ba716127343df6379f01baa2514b3ac7d8acf | 2021-07-13T14:47:48.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | obiohagwu | null | obiohagwu/Dialogpt-small-rick01 | 1 | null | transformers | 30,088 | Entry not found |
obss/mt5-base-3task-highlight-combined3 | 3a56e222938da89efd7e9485456504f5d5b51e51 | 2021-12-03T23:04:33.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"tr",
"dataset:tquad1",
"dataset:tquad2",
"dataset:xquad",
"arxiv:2111.06476",
"transformers",
"question-generation",
"answer-extraction",
"question-answering",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | obss | null | obss/mt5-base-3task-highlight-combined3 | 1 | null | transformers | 30,089 | ---
language: tr
datasets:
- tquad1
- tquad2
- xquad
tags:
- text2text-generation
- question-generation
- answer-extraction
- question-answering
- text-generation
pipeline_tag: text2text-generation
widget:
- text: "generate question: Legendary Entertainment, 2016 yılında bilimkurgu romanı Dune'un <hl> film ve TV haklarını <hl> satın aldı. Geliştirme kısa bir süre sonra başladı. Villeneuve projeye olan ilgisini dile getirdi ve resmi olarak yönetmen olarak imza attı. Roth ve Spaihts ile birlikte çalışarak senaryoyu iki bölüme ayırdı ve 1965 romanının 21. yüzyıla güncellenmiş bir uyarlamasını ekledi."
example_title: "Question Generation (Movie)"
- text: "generate question: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile <hl> Türkçe Soru Üretme / Soru Cevaplama <hl> konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Question Generation (Open Domain)"
- text: "generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı. 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi."
example_title: "Question Generation (History)"
- text: "extract answers: Cenevizlilerin önemli üslerinden Amasra’yı aldı. <hl> 1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi. <hl>"
example_title: "Answer Extraction (History)"
- text: "question: Bu model ne ise yarar? context: Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme / Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir. Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir."
example_title: "Question Answering (Open Domain)"
license: cc-by-4.0
---
# mt5-base for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
```
## Citation 📜
```
@article{akyon2021automated,
title={Automated question generation and question answering from Turkish texts using text-to-text transformers},
author={Akyon, Fatih Cagatay and Cavusoglu, Devrim and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
journal={arXiv preprint arXiv:2111.06476},
year={2021}
}
```
## Overview ✔️
**Language model:** mt5-base
**Language:** Turkish
**Downstream-task:** Extractive QA/QG, Answer Extraction
**Training data:** TQuADv2-train, TQuADv2-val, XQuAD.tr
**Code:** https://github.com/obss/turkish-question-generation
**Paper:** https://arxiv.org/abs/2111.06476
## Hyperparameters
```
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-base"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "highlight"
```
## Performance
Refer to [paper](https://arxiv.org/abs/2111.06476).
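The checkpoint can also be driven directly with Hugging Face Transformers instead of the repository's `GenerationAPI` (a minimal sketch; the prompt format follows the widget examples above, with the answer span wrapped in `<hl>` tokens):

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "obss/mt5-base-3task-highlight-combined3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

# Question generation: highlight the answer span with <hl> ... <hl>
prompt = ("generate question: Cenevizlilerin önemli üslerinden <hl> Amasra’yı <hl> aldı. "
          "1479’da bir antlaşma yaparak Venedik'le 16 yıllık savaşa sona verdi.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```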
## Usage 🔥
```python
from core.api import GenerationAPI
generation_api = GenerationAPI('mt5-base-3task-highlight-combined3')
context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""
# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)
# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)
# b) Answer Extraction
generation_api(task='answer-extraction', context=context)
``` |
oda/music5 | a7a146d12be636a844669ae324e0f6d8725fe3ac | 2020-02-14T04:01:03.000Z | [
"pytorch",
"transformers"
] | null | false | oda | null | oda/music5 | 1 | null | transformers | 30,090 | Entry not found |
odinmay/zackbotai | 4bc1965f8f38fac6ee364c337514d5841950e8c7 | 2021-06-03T21:41:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | odinmay | null | odinmay/zackbotai | 1 | null | transformers | 30,091 | Entry not found |
ojasaar/distilbert-sentence-msmarco-en-et | eae94400c0c0ff3c398de80d41ef72500026ad8a | 2020-11-05T15:09:04.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | false | ojasaar | null | ojasaar/distilbert-sentence-msmarco-en-et | 1 | null | transformers | 30,092 | Entry not found |
omkar1309/RickBot | ff0eaa4641134e3e134fdd45891b7b428b761142 | 2021-06-07T13:09:24.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | omkar1309 | null | omkar1309/RickBot | 1 | null | transformers | 30,093 | ---
tags:
- conversational
---
# My Awesome model |
oo/distilbert-base-uncased-finetuned-squad | a4e16c7c9b48aab3e83a4ac001dbc8cd00d68c8a | 2021-12-08T18:56:24.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | question-answering | false | oo | null | oo/distilbert-base-uncased-finetuned-squad | 1 | null | transformers | 30,094 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
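For quick testing, the checkpoint can be used with the question-answering pipeline (a minimal sketch; the question and context below are made-up examples):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="oo/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```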
|
oododo/DialoGPT-small-elon | 2ee4b1816571aee1afd53eb628c48b07d6445d90 | 2021-12-06T21:53:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | oododo | null | oododo/DialoGPT-small-elon | 1 | null | transformers | 30,095 | ---
tags:
- conversational
---
# Elon Musk DialoGPT Model |
orri/XLMR-ENIS-finetuned-ner | e27e9a5611a7af96c81397f02f92c4ca71c7f31f | 2021-10-01T16:14:57.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:agpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | orri | null | orri/XLMR-ENIS-finetuned-ner | 1 | null | transformers | 30,096 | ---
license: agpl-3.0
tags:
- generated_from_trainer
datasets:
- mim_gold_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: XLMR-ENIS-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: mim_gold_ner
type: mim_gold_ner
args: mim-gold-ner
metrics:
- name: Precision
type: precision
value: 0.8714268909540054
- name: Recall
type: recall
value: 0.842296759522456
- name: F1
type: f1
value: 0.8566142460684552
- name: Accuracy
type: accuracy
value: 0.9827189115812273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLMR-ENIS-finetuned-ner
This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Precision: 0.8714
- Recall: 0.8423
- F1: 0.8566
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0561 | 1.0 | 2904 | 0.0939 | 0.8481 | 0.8205 | 0.8341 | 0.9804 |
| 0.031 | 2.0 | 5808 | 0.0917 | 0.8652 | 0.8299 | 0.8472 | 0.9819 |
| 0.0186 | 3.0 | 8712 | 0.0955 | 0.8714 | 0.8423 | 0.8566 | 0.9827 |
### Framework versions
- Transformers 4.11.1
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
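For quick inference, the fine-tuned checkpoint can be used with the token-classification pipeline (a minimal sketch; the Icelandic example sentence is made up and the tokenizer is assumed to be included in this repository):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="orri/XLMR-ENIS-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Jón Jónsson býr í Reykjavík."))
```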
|
osamajandali/dronies-stewart | 6e85c0e522b754b522530c816d51c74d242b4cbf | 2021-12-31T14:45:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | osamajandali | null | osamajandali/dronies-stewart | 1 | null | transformers | 30,097 | ---
tags:
- conversational
---
# Dronies Stewart Model |
osanseviero/clip-st | c6d3f44330bb3d082a97bc3cb2b3e7ff9acdeb84 | 2021-05-17T08:59:53.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers"
] | feature-extraction | false | osanseviero | null | osanseviero/clip-st | 1 | null | sentence-transformers | 30,098 | ---
tags:
- sentence-transformers
- feature-extraction
---
# TODO: Name of Model
TODO: Description
## Model Description
TODO: Add relevant content
(0) Base Transformer Type: DistilBertModel
(1) Pooling mean
(2) Dense 768x512
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence"]
model = SentenceTransformer("osanseviero/clip-st")  # model id of this repository
embeddings = model.encode(sentences)
print(embeddings)
```
## TODO: Training Procedure
## TODO: Evaluation Results
## TODO: Citing & Authors
|
osanseviero/dummy-model-test | 01a35f92c70ade88b85008d16594199203c993bb | 2021-07-05T16:23:56.000Z | [
"pytorch",
"camembert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | osanseviero | null | osanseviero/dummy-model-test | 1 | null | transformers | 30,099 | Entry not found |