modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
qingtan007/bert_cn_finetuning | 36d7381f81b8c169e14f2d7e1edf3e102b88874a | 2021-05-20T03:49:01.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | qingtan007 | null | qingtan007/bert_cn_finetuning | 7 | null | transformers | 14,200 | Entry not found |
ramsrigouthamg/t5_squad | 197f2d94108a2aaa7a81c45b55afbc9ca804a980 | 2020-07-01T15:37:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ramsrigouthamg | null | ramsrigouthamg/t5_squad | 7 | 1 | transformers | 14,201 | Entry not found |
rotendahl/cold-bert-base-pre-norm | 4677da6b69a9ee29da68896705009d91211266ef | 2021-11-04T22:00:30.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
]
| text-generation | false | rotendahl | null | rotendahl/cold-bert-base-pre-norm | 7 | null | transformers | 14,202 | Entry not found |
saattrupdan/wav2vec2-xls-r-300m-cv8-da | 547a9717c50b751fbf4f05990f7000aecf84ae2b | 2022-03-21T17:29:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:common_voice_8_0",
"transformers",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | saattrupdan | null | saattrupdan/wav2vec2-xls-r-300m-cv8-da | 7 | null | transformers | 14,203 | ---
language:
- da
license: apache-2.0
tasks:
- automatic-speech-recognition
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-cv8-da
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 26.45
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 25.80
---
# XLS-R-300m-CV8-da
## Model description
This model is a fine-tuned version of the multilingual acoustic model [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Danish part of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), containing ~6 crowdsourced hours of read-aloud Danish speech.
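A minimal usage sketch (not part of the original card), assuming the standard Hugging Face Transformers ASR pipeline; the audio file name below is a placeholder for any 16 kHz Danish recording:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="saattrupdan/wav2vec2-xls-r-300m-cv8-da")

# "sample.wav" is a placeholder path to a 16 kHz Danish audio file.
print(asr("sample.wav")["text"])
```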
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 31.33 | 26.45 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 30.54 | 25.80 | |
samitizerxu/wav2vec2-xls-r-300m-lg | 5249ba66978b8cd7bbe36ce83b4a0f3b03e0bc47 | 2022-03-24T11:56:05.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"lg",
"dataset:common_voice",
"transformers",
"robust-speech-event",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | samitizerxu | null | samitizerxu/wav2vec2-xls-r-300m-lg | 7 | null | transformers | 14,204 | ---
language:
- lg
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- common_voice
- lg
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-lg
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 78.89
- name: Test CER
type: cer
value: 15.16
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-lg
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - LG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6989
- Wer: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
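For orientation, a hedged sketch (not taken from the original card) of how the hyperparameters listed above could be expressed as `transformers.TrainingArguments`; the `output_dir` value and the `fp16` flag (standing in for Native AMP) are assumptions:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-lg",  # placeholder, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    fp16=True,  # assumed equivalent of "Native AMP"
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 match the TrainingArguments defaults.
```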
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9089 | 6.33 | 500 | 2.8983 | 1.0002 |
| 2.5754 | 12.66 | 1000 | 1.8710 | 1.0 |
| 1.4093 | 18.99 | 1500 | 0.7195 | 0.8547 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id samitizerxu/wav2vec2-xls-r-300m-lg --dataset mozilla-foundation/common_voice_7_0 --config lg --split test
```
|
saurkulsh/T0pp | 71a28d4b9988bf76317c27a7c0368462ec1bedbb | 2022-01-06T05:48:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2110.08207",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | saurkulsh | null | saurkulsh/T0pp | 7 | null | transformers | 14,205 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
**How do I pronounce the name of the model?** T0 should be pronounced "T Zero" (like in "T5 for zero-shot") and any "p" stands for "Plus", so "T0pp" should be pronounced "T Zero Plus Plus"!
# Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets allow for benchmarking the ability of a model to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For instance, you can ask *"Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"*, and the model will hopefully generate *"Positive"*.
A few other examples that you can try:
- *A is the son's of B's uncle. What is the family relationship between A and B?*
- *Question A: How is air traffic controlled?<br>
Question B: How do you become an air traffic controller?<br>
Pick one: these questions are duplicates or not duplicates.*
- *Is the word 'table' used in the same meaning in the two following sentences?<br><br>
Sentence A: you can leave the books on the table over there.<br>
Sentence B: the tables in this book are very hard to read.*
- *Max: Know any good websites to buy clothes from?<br>
Payton: Sure :) LINK 1, LINK 2, LINK 3<br>
Max: That's a lot of them!<br>
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.<br>
Max: I'll check them out. Thanks.<br><br>
Who or what are Payton and Max referring to when they say 'them'?*
- *On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.<br>
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.<br><br>
Which book is the leftmost book?*
- *Reorder the words in this sentence: justin and name bieber years is my am I 27 old.*
# How to use
We make available the models presented in our [paper](https://arxiv.org/abs/2110.08207) along with the ablation models. We recommend using the [T0pp](https://huggingface.co/bigscience/T0pp) (pronounce "T Zero Plus Plus") checkpoint as it leads (on average) to the best performances on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[T0](https://huggingface.co/bigscience/T0)|11 billion|
|[T0p](https://huggingface.co/bigscience/T0p)|11 billion|
|[T0pp](https://huggingface.co/bigscience/T0pp)|11 billion|
|[T0_single_prompt](https://huggingface.co/bigscience/T0_single_prompt)|11 billion|
|[T0_original_task_only](https://huggingface.co/bigscience/T0_original_task_only)|11 billion|
|[T0_3B](https://huggingface.co/bigscience/T0_3B)|3 billion|
Here is how to use the model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
If you want to use another checkpoint, please replace the path in `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
# Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4). We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input. We detail our training data in the next section.
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000/`num_templates` examples)
- Example grouping: We use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original tasks templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompts examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
# Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Limitations
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use [.parallelize()](https://huggingface.co/transformers/parallelism.html).
- We have observed that different prompts can lead to varying performances. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
# Bias and fairness
Even though we made deliberate decisions to exclude datasets with potentially harmful content from the fine-tuning data, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist or biased:
- Input: `Is the earth flat?` - Prediction: `yes`
- Input: `Do vaccines cause autism?` - Prediction: `yes`
- Input: `Complete this sentence: This man works as a` - Prediction: `Architect`
- Input: `Complete this sentence: This woman works as a` - Prediction: `Nanny`
Language models can reproduce undesirable social biases represented in the large corpus they are pre-trained on. We evaluate our models in two ways: first in their ability to recognize or label gender biases and second in the extent to which they reproduce those biases.
To measure the ability of our model to recognize gender biases, we evaluate our models using the WinoGender Schemas (also called AX-g under SuperGLUE) and CrowS-Pairs. WinoGender Schemas are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias. We use the *Diverse Natural Language Inference Collection* ([Poliak et al., 2018](https://aclanthology.org/D18-1007/)) version that casts WinoGender as a textual entailment task and report accuracy. CrowS-Pairs is a challenge dataset for measuring the degree to which U.S. stereotypical biases are present in masked language models, using minimal pairs of sentences. We re-formulate the task by predicting which of two sentences is stereotypical (or anti-stereotypical) and report accuracy. For each dataset, we evaluate between 5 and 10 prompts.
| Dataset | Model | Average (Acc.) | Median (Acc.) |
|-|-|-|-|
| CrowS-Pairs | T0 | 59.2 | 83.8 |
| CrowS-Pairs | T0p | 57.6 | 83.8 |
| CrowS-Pairs | T0pp | 62.7 | 64.4 |
| CrowS-Pairs | T0_single_prompt | 57.6 | 69.5 |
| CrowS-Pairs | T0_original_task_only | 47.1 | 37.8 |
| CrowS-Pairs | T0_3B | 56.9 | 82.6 |
| WinoGender | T0 | 84.2 | 84.3 |
| WinoGender | T0p | 80.1 | 80.6 |
| WinoGender | T0pp | 89.2 | 90.0 |
| WinoGender | T0_single_prompt | 81.6 | 84.6 |
| WinoGender | T0_original_task_only | 83.7 | 83.8 |
| WinoGender | T0_3B | 69.7 | 69.4 |
To measure the extent to which our model reproduces gender biases, we evaluate our models using the WinoBias Schemas. WinoBias Schemas are pronoun coreference resolution tasks that have the potential to be influenced by gender bias. WinoBias Schemas has two schemas (type1 and type2) which are partitioned into pro-stereotype and anti-stereotype subsets. A "pro-stereotype" example is one where the correct answer conforms to stereotypes, while an "anti-stereotype" example is one where it opposes stereotypes. All examples have an unambiguously correct answer, and so the difference in scores between the "pro-" and "anti-" subset measures the extent to which stereotypes can lead the model astray. We report accuracies by considering a prediction correct if the target noun is present in the model's prediction. We evaluate on 6 prompts.
| Model | Subset | Average (Acc.) Pro | Average (Acc.) Anti | Average (Acc.) Pro - Anti | Median (Acc.) Pro | Median (Acc.) Anti | Median (Acc.) Pro - Anti |
|-|-|-|-|-|-|-|-|
| T0 | Type 1 | 68.0 | 61.9 | 6.0 | 71.7 | 61.9 | 9.8 |
| T0 | Type 2 | 79.3 | 76.4 | 2.8 | 79.3 | 75.0 | 4.3 |
| T0p | Type 1 | 66.6 | 57.2 | 9.4 | 71.5 | 62.6 | 8.8 |
| T0p | Type 2 | 77.7 | 73.4 | 4.3 | 86.1 | 81.3 | 4.8 |
| T0pp | Type 1 | 63.8 | 55.9 | 7.9 | 72.7 | 63.4 | 9.3 |
| T0pp | Type 2 | 66.8 | 63.0 | 3.9 | 79.3 | 74.0 | 5.3 |
| T0_single_prompt | Type 1 | 73.7 | 60.5 | 13.2 | 79.3 | 60.6 | 18.7 |
| T0_single_prompt | Type 2 | 77.7 | 69.6 | 8.0 | 80.8 | 69.7 | 11.1 |
| T0_original_task_only | Type 1 | 78.1 | 67.7 | 10.4 | 81.8 | 67.2 | 14.6 |
| T0_original_task_only | Type 2 | 85.2 | 82.3 | 2.9 | 89.6 | 85.4 | 4.3 |
| T0_3B | Type 1 | 82.3 | 70.1 | 12.2 | 83.6 | 62.9 | 20.7 |
| T0_3B | Type 2 | 83.8 | 76.5 | 7.3 | 85.9 | 75 | 10.9 |
# BibTeX entry and citation info
```bibtex
@misc{sanh2021multitask,
title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Stella Biderman and Leo Gao and Tali Bers and Thomas Wolf and Alexander M. Rush},
year={2021},
eprint={2110.08207},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
sentence-transformers/nli-bert-base-max-pooling | af35f05f41be3efd5233dfb9a880656f5794681f | 2021-08-05T08:27:21.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-bert-base-max-pooling | 7 | null | sentence-transformers | 14,206 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-bert-base-max-pooling
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-bert-base-max-pooling')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Max Pooling - Take the max value over time for every dimension.
def max_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
return torch.max(token_embeddings, 1)[0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-bert-base-max-pooling')
model = AutoModel.from_pretrained('sentence-transformers/nli-bert-base-max-pooling')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, max pooling.
sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-bert-base-max-pooling)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
shoubhik/wav2vec2-xls-r-300m-hindi-lm | ceb28aac6ab74f196214d63c305c77ce484bde6d | 2022-02-10T06:24:19.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | shoubhik | null | shoubhik/wav2vec2-xls-r-300m-hindi-lm | 7 | null | transformers | 14,207 | wav2vec2-xls-r-300m-hindi-lm
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the 'Openslr Multilingual and code-switching ASR challenge' dataset and 'mozilla-foundation/common_voice_7_0' dataset. It achieves the following results on the evaluation set:
With language model:
WER: 0.3421149821494522
CER: 0.12281403517543969
Without language model:
WER: 0.4642989043456851
CER: 0.15765197064963313
- robust-speech-event |
sismetanin/rubert-ru-sentiment-sentirueval2016 | 66e205b31e0e8b67600e2a0c25cbbca2b1ff556e | 2021-05-20T06:14:17.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | sismetanin | null | sismetanin/rubert-ru-sentiment-sentirueval2016 | 7 | null | transformers | 14,208 | Entry not found |
sismetanin/sbert-ru-sentiment-sentirueval2016 | 27da8f4627c5a74ed7801adc0e8c77837d490a9d | 2021-05-20T06:45:53.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | sismetanin | null | sismetanin/sbert-ru-sentiment-sentirueval2016 | 7 | null | transformers | 14,209 | Entry not found |
socialmediaie/TRAC2020_ALL_A_bert-base-multilingual-uncased | 8d3f70da2c92a6c5e332468636d1d795c0920848 | 2021-05-20T06:52:09.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ALL_A_bert-base-multilingual-uncased | 7 | null | transformers | 14,210 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_ENG_B_bert-base-uncased | 59bcde08281104a8141d25d87e46a10388789cdf | 2021-05-20T06:56:37.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_ENG_B_bert-base-uncased | 7 | null | transformers | 14,211 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
socialmediaie/TRAC2020_IBEN_A_bert-base-multilingual-uncased | 945a66b9309db57476f4acbb91e5b62e101791ec | 2021-05-20T07:03:18.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | socialmediaie | null | socialmediaie/TRAC2020_IBEN_A_bert-base-multilingual-uncased | 7 | null | transformers | 14,212 | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in HuggingFace's models repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models are retrained for uploading here after our submission so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np
import pandas as pd
TASK_LABEL_IDS = {
"Sub-task A": ["OAG", "NAG", "CAG"],
"Sub-task B": ["GEN", "NGEN"],
"Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}
model_version="databank" # other option is hugging face library
if model_version == "databank":
# Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
# Unzip the file at some model_path (we are using: "databank_model")
model_path = next(Path("databank_model").glob("./*/output/*/model"))
# Assuming you get the following type of structure inside "databank_model"
# 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
lang, task, _, base_model, _ = model_path.parts
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
base_model = f"socialmediaie/TRAC2020_{lang}_{lang.split()[-1]}_{base_model}"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model)
# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()
task_labels = TASK_LABEL_IDS[task]
sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])
with torch.no_grad():
logits, = model(tokens_tensor, labels=None)
logits
preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
spasis/bert-finetuned-ner-accelerate | 4c714baae342bae1b82eaf2f088b53bd46629fc4 | 2022-02-23T01:43:24.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | spasis | null | spasis/bert-finetuned-ner-accelerate | 7 | null | transformers | 14,213 | Entry not found |
springml111/Pegasus_Paraphrase_model | 1559481a2a62e1c43fbcb6cb3d8789424550f021 | 2021-12-01T13:56:15.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | springml111 | null | springml111/Pegasus_Paraphrase_model | 7 | null | transformers | 14,214 | Entry not found |
srush/bert_uncased_L-2_H-128_A-2 | 56dc752de1d9f04222866dc6a4e61662a61e41bb | 2021-05-20T07:12:06.000Z | [
"pytorch",
"jax",
"bert",
"transformers"
]
| null | false | srush | null | srush/bert_uncased_L-2_H-128_A-2 | 7 | null | transformers | 14,215 | Entry not found |
sshleifer/distill-mbart-en-ro-12-4 | f7ea9a9c3bf5da6b4db7c74b49dba6ad2e12bcbd | 2020-09-10T15:56:16.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sshleifer | null | sshleifer/distill-mbart-en-ro-12-4 | 7 | null | transformers | 14,216 | Entry not found |
sshleifer/student_xsum_12_3 | bdfab7a48f1400700e1948de7454f1dac659e96f | 2021-06-14T09:46:21.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | sshleifer | null | sshleifer/student_xsum_12_3 | 7 | null | transformers | 14,217 | Entry not found |
superb/hubert-large-superb-ks | bc13861e0b60c4f9eec276eb6c8366b0ac49e52d | 2021-11-04T16:03:31.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
]
| audio-classification | false | superb | null | superb/hubert-large-superb-ks | 7 | null | transformers | 14,218 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
license: apache-2.0
widget:
- example_title: Speech Commands "down"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_down.wav
- example_title: Speech Commands "go"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_go.wav
---
# Hubert-Large for Keyword Spotting
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Keyword Spotting task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/speech_commands).
The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
words. The task is usually performed on-device for fast response times. Thus, accuracy, model size, and
inference time are all crucial. SUPERB uses the widely used
[Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task.
The dataset consists of ten classes of keywords, a class for silence, and an unknown class to capture false positives.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ks-keyword-spotting).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-large-superb-ks")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
from torchaudio.sox_effects import apply_effects_file
effects = [["channels", "1"], ["rate", "16000"], ["gain", "-3.0"]]
def map_to_array(example):
speech, _ = apply_effects_file(example["file"], effects)
example["speech"] = speech.squeeze(0).numpy()
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ks", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-ks")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-ks")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9529` | `0.9532` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
tanay/layoutlm-fine-tuned | 0d3a0f2fec887f0a7666075c07708ba8bba3fba2 | 2021-07-02T03:19:32.000Z | [
"pytorch",
"layoutlm",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tanay | null | tanay/layoutlm-fine-tuned | 7 | null | transformers | 14,219 | Entry not found |
tanyagoyal/paraphrase-sow | 8f883e424b4197dbbe4370ab2ca8f21287cc5595 | 2021-08-31T22:51:47.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | tanyagoyal | null | tanyagoyal/paraphrase-sow | 7 | null | transformers | 14,220 | Entry not found |
tartuNLP/EstBERT_Morph_128 | fec00e6c003061341c3a9e0b46092820202fe42b | 2022-05-03T07:49:21.000Z | [
"pytorch",
"bert",
"token-classification",
"et",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
]
| token-classification | false | tartuNLP | null | tartuNLP/EstBERT_Morph_128 | 7 | null | transformers | 14,221 | ---
language: et
license: cc-by-4.0
--- |
techthiyanes/Bert_Bahasa_Sentiment | ac80aa8bdf259ed33d1812f4367c385325b3d89e | 2021-05-20T07:26:52.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | techthiyanes | null | techthiyanes/Bert_Bahasa_Sentiment | 7 | null | transformers | 14,222 | import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The tokenizer checkpoint is assumed to be the model repository itself (it was left unspecified in the original snippet).
model_checkpoint = 'techthiyanes/Bert_Bahasa_Sentiment'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained('techthiyanes/Bert_Bahasa_Sentiment')

inputs = tokenizer("saya tidak", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
print(loss, logits)
|
thyagosme/bert-base-uncased-finetuned-swag | c88dcf5b48d08face4e58a44910cb163e9f1828c | 2022-02-12T02:13:46.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:swag",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| multiple-choice | false | thyagosme | null | thyagosme/bert-base-uncased-finetuned-swag | 7 | null | transformers | 14,223 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0438
- Accuracy: 0.7915
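A minimal inference sketch (assumed, not part of the generated card) using `AutoModelForMultipleChoice`; the prompt and candidate endings are illustrative placeholders:
```python
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("thyagosme/bert-base-uncased-finetuned-swag")

prompt = "She opened the umbrella because"
candidates = ["it had started to rain.", "the room was getting dark."]

# Pair the prompt with every candidate ending, then batch as (1, num_choices, seq_len).
encoding = tokenizer([prompt] * len(candidates), candidates, return_tensors="pt", padding=True)
outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()})
print(candidates[outputs.logits.argmax(-1).item()])
```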
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7708 | 1.0 | 4597 | 0.6025 | 0.7659 |
| 0.4015 | 2.0 | 9194 | 0.6287 | 0.7841 |
| 0.1501 | 3.0 | 13791 | 1.0438 | 0.7915 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
tlemberger/sd-ner | 6293b6a5b7581aef549f5b169e0ae75697f7ac58 | 2021-05-20T22:31:05.000Z | [
"pytorch",
"jax",
"roberta",
"token-classification",
"english",
"dataset:EMBO/sd-panels",
"transformers",
"token classification",
"autotrain_compatible"
]
| token-classification | false | tlemberger | null | tlemberger/sd-ner | 7 | null | transformers | 14,224 | ---
language:
- english
thumbnail:
tags:
- token classification
license:
datasets:
- EMBO/sd-panels
metrics:
-
---
# sd-ner
## Model description
This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of English scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang) and fine-tuned for token classification on the SourceData [sd-panels](https://huggingface.co/datasets/EMBO/sd-panels) dataset to perform Named Entity Recognition of bioentities.
## Intended uses & limitations
#### How to use
The intended use of this model is for Named Entity Recognition of biological entities used in SourceData annotations (https://sourcedata.embo.org), including small molecules, gene products (genes and proteins), subcellular components, cell lines and cell types, organs and tissues, and species, as well as experimental methods.
To have a quick check of the model:
```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification
example = """<s> F. Western blot of input and eluates of Upf1 domains purification in a Nmd4-HA strain. The band with the # might corresponds to a dimer of Upf1-CH, bands marked with a star correspond to residual signal with the anti-HA antibodies (Nmd4). Fragments in the eluate have a smaller size because the protein A part of the tag was removed by digestion with the TEV protease. G6PDH served as a loading control in the input samples </s>"""
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-ner')
ner = pipeline('ner', model, tokenizer=tokenizer)
res = ner(example)
for r in res:
print(r['word'], r['entity'])
```
#### Limitations and bias
The model must be used with the `roberta-base` tokenizer.
## Training data
The model was trained for token classification using the [EMBO/sd-panels dataset](https://huggingface.co/datasets/EMBO/sd-panels), which includes manually annotated examples.
## Training procedure
The training was run on a NVIDIA DGX Station with 4XTesla V100 GPUs.
Training code is available at https://github.com/source-data/soda-roberta
- Command: `python -m tokcl.train /data/json/sd_panels NER --num_train_epochs=3.5`
- Tokenizer vocab size: 50265
- Training data: EMBO/biolang MLM
- Training with 31410 examples.
- Evaluating on 8861 examples.
- Training on 15 features: O, I-SMALL_MOLECULE, B-SMALL_MOLECULE, I-GENEPROD, B-GENEPROD, I-SUBCELLULAR, B-SUBCELLULAR, I-CELL, B-CELL, I-TISSUE, B-TISSUE, I-ORGANISM, B-ORGANISM, I-EXP_ASSAY, B-EXP_ASSAY
- Epochs: 3.5
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
## Eval results
On test set with `sklearn.metrics`:
```
precision recall f1-score support
CELL 0.77 0.81 0.79 3477
EXP_ASSAY 0.71 0.70 0.71 7049
GENEPROD 0.86 0.90 0.88 16140
ORGANISM 0.80 0.82 0.81 2759
SMALL_MOLECULE 0.78 0.82 0.80 4446
SUBCELLULAR 0.71 0.75 0.73 2125
TISSUE 0.70 0.75 0.73 1971
micro avg 0.79 0.82 0.81 37967
macro avg 0.76 0.79 0.78 37967
weighted avg 0.79 0.82 0.81 37967
```
|
tlkh/t5-metaphor-large | 9d70555dc26010fe75318c39f95b82e2a6116835 | 2021-09-17T03:00:53.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | tlkh | null | tlkh/t5-metaphor-large | 7 | null | transformers | 14,225 | |
tuantt/GroundNet | d6e34edd298398b09cc04e7ba3c6fd44d58c2386 | 2021-07-18T14:40:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | tuantt | null | tuantt/GroundNet | 7 | null | transformers | 14,226 | ---
tags:
- conversational
---
## A bot to chat with |
tucan9389/kcbert-base-finetuned | 9cedac9258d1470e5c48b71748c75d00477680a0 | 2021-10-21T11:53:00.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:klue",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | tucan9389 | null | tucan9389/kcbert-base-finetuned | 7 | null | transformers | 14,227 | ---
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- accuracy
model-index:
- name: kcbert-base-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: klue
type: klue
args: ynat
metrics:
- name: Accuracy
type: accuracy
value: 0.8329856154606347
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert-base-finetuned
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7393
- Accuracy: 0.8330
## Model description
More information needed
## Intended uses & limitations
More information needed
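A minimal usage sketch for Korean topic classification with the `text-classification` pipeline (the input string is an illustrative placeholder):
```python
from transformers import pipeline

# Topic classification over Korean news headlines (KLUE-YNAT labels).
classifier = pipeline("text-classification", model="tucan9389/kcbert-base-finetuned")
print(classifier("여기에 한국어 뉴스 제목을 입력하세요"))  # replace with a real Korean headline
```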
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4612 | 1.0 | 2855 | 0.5216 | 0.8143 |
| 0.3061 | 2.0 | 5710 | 0.5130 | 0.8248 |
| 0.2129 | 3.0 | 8565 | 0.6062 | 0.8257 |
| 0.1337 | 4.0 | 11420 | 0.7393 | 0.8330 |
| 0.0653 | 5.0 | 14275 | 0.8651 | 0.8302 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tuhailong/chinese-roberta-wwm-ext | 1627f4f1dd27b7457faa63fa9e922cb264d83696 | 2022-04-19T12:38:22.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"dataset:dialogue",
"transformers",
"chinese-roberta-wwm-ext",
"autotrain_compatible"
]
| fill-mask | false | tuhailong | null | tuhailong/chinese-roberta-wwm-ext | 7 | null | transformers | 14,228 | ---
language: zh
tags:
- chinese-roberta-wwm-ext
datasets:
- dialogue
---
# Data
The unsupervised training data is e-commerce dialogue, about 200,000 (20w) sentence pairs.
## Model
The model is chinese-roberta-wwm-ext.
### Usage
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> model = AutoModel.from_pretrained("tuhailong/chinese-roberta-wwm-ext")
>>> tokenizer = AutoTokenizer.from_pretrained("tuhailong/chinese-roberta-wwm-ext")
>>> sentences_str_list = ["今天天气不错的","天气不错的"]
>>> inputs = tokenizer(sentences_str_list,return_tensors="pt", padding='max_length', truncation=True, max_length=32)
>>> outputs = model(**inputs)
``` |
tushar-rishav/bert-finetuned-ner | ddd26cc20e8a5a4063ed849f760fa16ba124c2c8 | 2021-11-25T06:02:11.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tushar-rishav | null | tushar-rishav/bert-finetuned-ner | 7 | null | transformers | 14,229 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1196
- Precision: 0.7872
- Recall: 0.8292
- F1: 0.8077
- Accuracy: 0.9722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1243 | 1.0 | 1380 | 0.0932 | 0.6752 | 0.8222 | 0.7415 | 0.9635 |
| 0.0624 | 2.0 | 2760 | 0.0890 | 0.7298 | 0.8368 | 0.7797 | 0.9686 |
| 0.0405 | 3.0 | 4140 | 0.1029 | 0.7792 | 0.8356 | 0.8064 | 0.9715 |
| 0.0226 | 4.0 | 5520 | 0.1196 | 0.7872 | 0.8292 | 0.8077 | 0.9722 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ueb1/distilbert-base-uncased-finetuned-ner | c95320c91219a7294699416488a31756c2a298cd | 2021-10-04T18:16:48.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ueb1 | null | ueb1/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 14,230 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9290229566374626
- name: Recall
type: recall
value: 0.9371294328224634
- name: F1
type: f1
value: 0.9330585876587213
- name: Accuracy
type: accuracy
value: 0.9839547555880344
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0608
- Precision: 0.9290
- Recall: 0.9371
- F1: 0.9331
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2276 | 1.0 | 878 | 0.0685 | 0.9204 | 0.9246 | 0.9225 | 0.9814 |
| 0.0498 | 2.0 | 1756 | 0.0622 | 0.9238 | 0.9358 | 0.9298 | 0.9833 |
| 0.0298 | 3.0 | 2634 | 0.0608 | 0.9290 | 0.9371 | 0.9331 | 0.9840 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v2 | c908d59eefaa28518d7827223c00418d1b35775e | 2022-01-05T22:41:18.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"pt",
"dataset:msmarco",
"arxiv:2108.13897",
"transformers",
"msmarco",
"miniLM",
"tensorflow",
"pt-br",
"license:mit"
]
| text-classification | false | unicamp-dl | null | unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v2 | 7 | null | transformers | 14,231 | ---
language: pt
license: mit
tags:
- msmarco
- miniLM
- pytorch
- tensorflow
- pt
- pt-br
datasets:
- msmarco
widget:
- text: "Texto de exemplo em português"
inference: false
---
# mMiniLM-L6-v2 Reranker finetuned on mMARCO
## Introduction
mMiniLM-L6-v2-en-pt-msmarco-v2 is a multilingual miniLM-based model finetuned on a bilingual version of MS MARCO passage dataset. This bilingual dataset version is formed by the original MS MARCO dataset (in English) and a Portuguese translated version. In the v2 version, the Portuguese dataset was translated using Google Translate.
Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository.
## Usage
```python
from transformers import AutoTokenizer, AutoModel
model_name = 'unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
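The snippet above only loads the encoder. For reranking, the checkpoint is typically loaded with a sequence-classification head and scored on query-passage pairs; a minimal sketch under that assumption (the query and passages below are illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'unicamp-dl/mMiniLM-L6-v2-en-pt-msmarco-v2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
reranker = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "qual é a capital do Brasil?"  # illustrative query
passages = [
    "Brasília é a capital federal do Brasil.",
    "O Rio de Janeiro é conhecido pelo carnaval.",
]

inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # Use the last logit column as a relevance score (works for 1- or 2-label heads).
    scores = reranker(**inputs).logits[:, -1]
ranking = sorted(zip(passages, scores.tolist()), key=lambda x: x[1], reverse=True)
print(ranking)
```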
# Citation
If you use mMiniLM-L6-v2-en-pt-msmarco-v2, please cite:
```
@misc{bonifacio2021mmarco,
    title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset},
    author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and Roberto Lotufo and Rodrigo Nogueira},
    year={2021},
    eprint={2108.13897},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
|
upskyy/kobart-summarization-v3 | 32db43c0b5c3862ea89ea58a0030ba6d0eccc91f | 2021-10-05T01:32:06.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | upskyy | null | upskyy/kobart-summarization-v3 | 7 | 1 | transformers | 14,232 | Entry not found |
vespa-engine/colbert-medium | 20b55eab56d2a1d8f716406c47001a0db912b059 | 2021-05-20T08:59:43.000Z | [
"pytorch",
"bert",
"arxiv:2004.12832",
"transformers"
]
| null | false | vespa-engine | null | vespa-engine/colbert-medium | 7 | null | transformers | 14,233 | # MS Marco Ranking with ColBERT on Vespa.ai
Model is based on [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT](https://arxiv.org/abs/2004.12832).
This BERT model is based on [google/bert_uncased_L-8_H-512_A-8](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) and trained using the
original [ColBERT training routine](https://github.com/stanford-futuredata/ColBERT/).
The model weights have been tuned by training using the `triples.train.small.tar.gz` file from [MSMARCO-Passage-Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking).
To use this model with vespa.ai for MS Marco Passage Ranking, see
[MS Marco Ranking using Vespa.ai sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking).
# MS Marco Passage Ranking
| MS Marco Passage Ranking Query Set | MRR@10 ColBERT on Vespa.ai |
|------------------------------------|----------------|
| Dev | 0.354 |
| Eval | 0.347 |
The official BM25 baseline ranking model achieves an MRR@10 of 0.16 on the eval question set and 0.167 on the dev question set.
See [MS Marco Passage Ranking Leaderboard](https://microsoft.github.io/msmarco/).
## Export ColBERT query encoder to ONNX
We represent the ColBERT query encoder in the Vespa runtime, to map the textual query representation to the tensor representation. For this
we use Vespa's support for running ONNX models. One can use the following snippet to export the model for serving.
```python
from transformers import BertModel
from transformers import BertPreTrainedModel
from transformers import BertConfig
import torch
import torch.nn as nn
class VespaColBERT(BertPreTrainedModel):

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.linear = nn.Linear(config.hidden_size, 32, bias=False)
        self.init_weights()

    def forward(self, input_ids, attention_mask):
        Q = self.bert(input_ids, attention_mask=attention_mask)[0]
        Q = self.linear(Q)
        return torch.nn.functional.normalize(Q, p=2, dim=2)
colbert_query_encoder = VespaColBERT.from_pretrained("vespa-engine/colbert-medium")
#Export model to ONNX for serving in Vespa
input_names = ["input_ids", "attention_mask"]
output_names = ["contextual"]
#input, max 32 query term
input_ids = torch.ones(1,32, dtype=torch.int64)
attention_mask = torch.ones(1,32,dtype=torch.int64)
args = (input_ids, attention_mask)
torch.onnx.export(colbert_query_encoder,
args=args,
f="query_encoder_colbert.onnx",
input_names = input_names,
output_names = output_names,
dynamic_axes = {
"input_ids": {0: "batch"},
"attention_mask": {0: "batch"},
"contextual": {0: "batch"},
},
opset_version=11)
```
# Representing the model on Vespa.ai
See [Ranking with ONNX models](https://docs.vespa.ai/documentation/onnx.html) and [MS Marco Ranking sample app](https://github.com/vespa-engine/sample-apps/tree/master/msmarco-ranking)
|
vicd/sentiment | da117e287f433ed5c6bad0ef3e8f3169cb6d3633 | 2021-05-20T22:54:45.000Z | [
"pytorch",
"jax",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | vicd | null | vicd/sentiment | 7 | null | transformers | 14,234 | Entry not found |
vidhur2k/mBERT-GermanicLang | aad867fc2edb0aa7d46fea8046523572794b44c9 | 2021-12-06T12:51:37.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | vidhur2k | null | vidhur2k/mBERT-GermanicLang | 7 | null | transformers | 14,235 | Entry not found |
vishalz/paraphrase_model | 9938fba0a1811937c21a97cd0b7d7a369ee7a6cd | 2021-09-23T10:00:25.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | vishalz | null | vishalz/paraphrase_model | 7 | null | transformers | 14,236 | pegasus paraphraser model
using <a href="https://huggingface.co/tuner007/pegasus_paraphrase" target="_blank">tuner007/pegasus_paraphrase</a> |
vishnun/t5spellcorrector | b38382384d303fae3cc75877efecce7a73bd1b65 | 2021-12-15T05:26:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | vishnun | null | vishnun/t5spellcorrector | 7 | null | transformers | 14,237 | Entry not found |
wzkariampuzha/EpiExtract4GARD | 637ae833ec586f3d98fb745ea65e3b8e58cc0469 | 2021-09-21T20:01:37.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | wzkariampuzha | null | wzkariampuzha/EpiExtract4GARD | 7 | null | transformers | 14,238 | This is the model that can extract epidemiological information from rare disease abstracts. |
ylh1013/fintune-ja-chatbot | a27ee9fc931b099b4d82ef22ba12b339af7a396c | 2022-01-23T14:21:02.000Z | [
"pytorch",
"gpt2",
"text-generation",
"finetuned_from",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-generation | false | ylh1013 | null | ylh1013/fintune-ja-chatbot | 7 | null | transformers | 14,239 | ---
language:
- finetuned_from
license: mit
tags:
- generated_from_trainer
model-index:
- name: fintune-ja-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fintune-ja-chatbot
This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Tokenizers 0.10.3
|
yosuke/bert-base-japanese-char | 804c52ff8761166f579b8fece0dc22ef07501963 | 2021-05-20T09:32:29.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | yosuke | null | yosuke/bert-base-japanese-char | 7 | null | transformers | 14,240 | Entry not found |
yuchenlin/BART0pp-base | 4f7616889c27db9b4320924b7c2165ff75f160bd | 2021-12-11T05:01:34.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | yuchenlin | null | yuchenlin/BART0pp-base | 7 | 1 | transformers | 14,241 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
TBA |
z-uo/it5-squadv1-it | ed95257d2f6d2fc67342106278905e2c20945e2b | 2021-11-01T19:49:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"dataset:z-uo/squad-it",
"transformers",
"text2text_generation",
"question_answering",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | z-uo | null | z-uo/it5-squadv1-it | 7 | 1 | transformers | 14,242 | ---
tags:
- text2text_generation
- question_answering
language:
- it
model-index:
- name: it5-squadv1-it
results: []
datasets:
- z-uo/squad-it
---
# Question Answering with Italian T5
This model is a version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base), which was pre-trained on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB), fine-tuned on the [squad-it](https://huggingface.co/datasets/z-uo/squad-it) dataset for Italian question answering.
To use it, put the question and the context in the same string, for example:
```
In quale anno si è verificato il terremoto nel Sichuan?
Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, e si è verificato alle 02:28:01 PM China Standard Time all' epicentro (06:28:01 UTC) il 12 maggio nella provincia del Sichuan, ha ucciso 69.197 persone e lasciato 18.222 dispersi.
```
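A minimal generation sketch following the input format shown above (question and context concatenated into one string); the generation settings such as `max_length` are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "z-uo/it5-squadv1-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "In quale anno si è verificato il terremoto nel Sichuan?"
context = (
    "Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, "
    "si è verificato il 12 maggio nella provincia del Sichuan, ha ucciso 69.197 persone "
    "e lasciato 18.222 dispersi."
)

# Question and context are concatenated into a single input string.
inputs = tokenizer(f"{question} {context}", return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```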
Training achieved the following results/params:
- epoch: 2.0
- train_loss: 0.1064
- train_samples: 87599
- eval_samples : 10570
- eval_gen_len : 9.2974
- eval_loss : 0.5939
- eval_rouge1 : 17.5052
- eval_rouge2 : 5.8714
- eval_rougeL : 17.4487
- eval_rougeLsum : 17.4528
# Train the model
To train the model, use [this repo](https://gitlab.com/nicolalandro/qandatrain), where you will find the requirements.txt and the source code used for training.
|
zharry29/order_benchmark_bert | 109b692a997a62ac74c3d50d70f1b4aa2c41c662 | 2021-05-20T09:43:21.000Z | [
"pytorch",
"jax",
"bert",
"multiple-choice",
"transformers"
]
| multiple-choice | false | zharry29 | null | zharry29/order_benchmark_bert | 7 | null | transformers | 14,243 | Entry not found |
wietsedv/xlm-roberta-base-ft-udpos28-fr | 364d328e1f8593c32eed98a1ec3e8fe25f9693e2 | 2022-02-25T09:58:30.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"fr",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-fr | 7 | null | transformers | 14,244 |
---
language:
- fr
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-fr
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 87.6
- type: accuracy
name: Dutch Test accuracy
value: 89.0
- type: accuracy
name: German Test accuracy
value: 85.5
- type: accuracy
name: Italian Test accuracy
value: 91.7
- type: accuracy
name: French Test accuracy
value: 97.1
- type: accuracy
name: Spanish Test accuracy
value: 93.4
- type: accuracy
name: Russian Test accuracy
value: 91.4
- type: accuracy
name: Swedish Test accuracy
value: 89.6
- type: accuracy
name: Norwegian Test accuracy
value: 84.3
- type: accuracy
name: Danish Test accuracy
value: 90.2
- type: accuracy
name: Low Saxon Test accuracy
value: 32.4
- type: accuracy
name: Akkadian Test accuracy
value: 24.5
- type: accuracy
name: Armenian Test accuracy
value: 87.2
- type: accuracy
name: Welsh Test accuracy
value: 69.2
- type: accuracy
name: Old East Slavic Test accuracy
value: 71.5
- type: accuracy
name: Albanian Test accuracy
value: 78.3
- type: accuracy
name: Slovenian Test accuracy
value: 80.6
- type: accuracy
name: Guajajara Test accuracy
value: 20.3
- type: accuracy
name: Kurmanji Test accuracy
value: 78.9
- type: accuracy
name: Turkish Test accuracy
value: 77.9
- type: accuracy
name: Finnish Test accuracy
value: 86.5
- type: accuracy
name: Indonesian Test accuracy
value: 84.8
- type: accuracy
name: Ukrainian Test accuracy
value: 88.9
- type: accuracy
name: Polish Test accuracy
value: 88.1
- type: accuracy
name: Portuguese Test accuracy
value: 92.3
- type: accuracy
name: Kazakh Test accuracy
value: 82.9
- type: accuracy
name: Latin Test accuracy
value: 79.6
- type: accuracy
name: Old French Test accuracy
value: 68.2
- type: accuracy
name: Buryat Test accuracy
value: 53.6
- type: accuracy
name: Kaapor Test accuracy
value: 15.0
- type: accuracy
name: Korean Test accuracy
value: 64.3
- type: accuracy
name: Estonian Test accuracy
value: 87.5
- type: accuracy
name: Croatian Test accuracy
value: 89.5
- type: accuracy
name: Gothic Test accuracy
value: 11.6
- type: accuracy
name: Swiss German Test accuracy
value: 39.5
- type: accuracy
name: Assyrian Test accuracy
value: 14.8
- type: accuracy
name: North Sami Test accuracy
value: 27.0
- type: accuracy
name: Naija Test accuracy
value: 36.9
- type: accuracy
name: Latvian Test accuracy
value: 87.7
- type: accuracy
name: Chinese Test accuracy
value: 44.1
- type: accuracy
name: Tagalog Test accuracy
value: 72.8
- type: accuracy
name: Bambara Test accuracy
value: 24.7
- type: accuracy
name: Lithuanian Test accuracy
value: 86.9
- type: accuracy
name: Galician Test accuracy
value: 91.6
- type: accuracy
name: Vietnamese Test accuracy
value: 67.0
- type: accuracy
name: Greek Test accuracy
value: 88.0
- type: accuracy
name: Catalan Test accuracy
value: 92.5
- type: accuracy
name: Czech Test accuracy
value: 89.7
- type: accuracy
name: Erzya Test accuracy
value: 41.2
- type: accuracy
name: Bhojpuri Test accuracy
value: 48.9
- type: accuracy
name: Thai Test accuracy
value: 56.3
- type: accuracy
name: Marathi Test accuracy
value: 83.4
- type: accuracy
name: Basque Test accuracy
value: 75.9
- type: accuracy
name: Slovak Test accuracy
value: 91.1
- type: accuracy
name: Kiche Test accuracy
value: 32.5
- type: accuracy
name: Yoruba Test accuracy
value: 19.4
- type: accuracy
name: Warlpiri Test accuracy
value: 26.3
- type: accuracy
name: Tamil Test accuracy
value: 83.5
- type: accuracy
name: Maltese Test accuracy
value: 17.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 60.2
- type: accuracy
name: Icelandic Test accuracy
value: 83.2
- type: accuracy
name: Mbya Guarani Test accuracy
value: 26.1
- type: accuracy
name: Urdu Test accuracy
value: 67.5
- type: accuracy
name: Romanian Test accuracy
value: 87.1
- type: accuracy
name: Persian Test accuracy
value: 78.6
- type: accuracy
name: Apurina Test accuracy
value: 26.1
- type: accuracy
name: Japanese Test accuracy
value: 32.3
- type: accuracy
name: Hungarian Test accuracy
value: 86.3
- type: accuracy
name: Hindi Test accuracy
value: 73.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 35.0
- type: accuracy
name: Faroese Test accuracy
value: 75.7
- type: accuracy
name: Sanskrit Test accuracy
value: 17.9
- type: accuracy
name: Livvi Test accuracy
value: 53.2
- type: accuracy
name: Arabic Test accuracy
value: 83.1
- type: accuracy
name: Wolof Test accuracy
value: 24.6
- type: accuracy
name: Bulgarian Test accuracy
value: 90.9
- type: accuracy
name: Akuntsu Test accuracy
value: 35.2
- type: accuracy
name: Makurap Test accuracy
value: 13.0
- type: accuracy
name: Kangri Test accuracy
value: 43.0
- type: accuracy
name: Breton Test accuracy
value: 67.7
- type: accuracy
name: Telugu Test accuracy
value: 83.6
- type: accuracy
name: Cantonese Test accuracy
value: 51.6
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 43.3
- type: accuracy
name: Karelian Test accuracy
value: 67.3
- type: accuracy
name: Upper Sorbian Test accuracy
value: 65.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 29.5
- type: accuracy
name: Irish Test accuracy
value: 69.4
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 19.9
- type: accuracy
name: Manx Test accuracy
value: 27.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 26.9
- type: accuracy
name: Afrikaans Test accuracy
value: 84.9
- type: accuracy
name: Old Turkish Test accuracy
value: 38.0
- type: accuracy
name: Tupinamba Test accuracy
value: 22.8
- type: accuracy
name: Belarusian Test accuracy
value: 89.5
- type: accuracy
name: Serbian Test accuracy
value: 90.8
- type: accuracy
name: Moksha Test accuracy
value: 39.0
- type: accuracy
name: Western Armenian Test accuracy
value: 76.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 60.0
- type: accuracy
name: Khunsari Test accuracy
value: 35.1
- type: accuracy
name: Hebrew Test accuracy
value: 94.8
- type: accuracy
name: Uyghur Test accuracy
value: 75.2
- type: accuracy
name: Chukchi Test accuracy
value: 30.9
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: French
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fr")
```
|
DoyyingFace/bert-asian-hate-tweets-self-clean | 860abaebba3b9c9f197cd4b0fd7b7949257568b2 | 2022-02-24T10:38:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean | 7 | null | transformers | 14,245 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12 | d56816217660830a0082c06d0585d8bfb209b5ad | 2022-02-24T16:55:50.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12 | 7 | null | transformers | 14,246 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | 90bd5f78031d78d9bc39818b6c30f5f08ea7584f | 2022-02-25T09:20:41.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | 7 | null | transformers | 14,247 | Entry not found |
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-8 | c43f1d91737328e3029b942f4b77d09a10e795f8 | 2022-02-25T18:42:10.000Z | [
"pytorch",
"roberta",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | anas-awadalla | null | anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-8 | 7 | null | transformers | 14,248 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-1024-finetuned-squad-seed-8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5-freeze4 | e95c364dcc0efe2e160dd29fe459f7067a911658 | 2022-02-26T03:38:59.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5-freeze4 | 7 | null | transformers | 14,249 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-50 | a350ebe5555728730cae20ee490056ad92d5c532 | 2022-02-26T03:50:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-warmup-50 | 7 | null | transformers | 14,250 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5-warmup-50 | b3761f9f289e3b514513ba691e8a9bc74fcf5c68 | 2022-02-26T03:56:12.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-epoch5-warmup-50 | 7 | null | transformers | 14,251 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-clean-small-discriminate | 457433ae463c086540982bfbbc5f9dff9b8184e9 | 2022-02-26T04:29:40.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-clean-small-discriminate | 7 | null | transformers | 14,252 | Entry not found |
bookbot/wav2vec2-adult-child-cls | e43c337cb9186bc7c47da29b08509a43cf66f542 | 2022-02-26T13:39:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"en",
"arxiv:2006.11477",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| audio-classification | false | bookbot | null | bookbot/wav2vec2-adult-child-cls | 7 | null | transformers | 14,253 | ---
language: en
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-adult-child-cls
results: []
---
# Wav2Vec2 Adult/Child Speech Classifier
Wav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a fine-tuned version of [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on a private adult/child speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
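A minimal usage sketch with the `audio-classification` pipeline (the file path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="bookbot/wav2vec2-adult-child-cls")
# "speech.wav" is a placeholder path; 16 kHz mono audio is the usual wav2vec 2.0 input.
print(classifier("speech.wav"))
```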
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| -------------------------- | ------- | ----------- | ----------------------------------------- |
| `wav2vec2-adult-child-cls` | 91M | wav2vec 2.0 | Adult/Child Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| --------------------------------- | ------ | -------- | ------ |
| Adult/Child Speech Classification | 0.1682 | 95.80% | 0.9618 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.2709 | 1.0 | 384 | 0.2616 | 0.9104 | 0.9142 |
| 0.2112 | 2.0 | 768 | 0.1826 | 0.9386 | 0.9421 |
| 0.1755 | 3.0 | 1152 | 0.1898 | 0.9354 | 0.9428 |
| 0.0915 | 4.0 | 1536 | 0.1682 | 0.9580 | 0.9618 |
| 0.1042 | 5.0 | 1920 | 0.1717 | 0.9511 | 0.9554 |
## Disclaimer
Do consider the biases from the pre-training datasets that may have been carried over into the results of this model.
## Authors
Wav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
saattrupdan/voxpopuli-wav2vec2-large-cv8-da | fddf4d0facb512e71eacb39d11426f0b715c87a1 | 2022-03-22T09:58:54.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:common_voice_8_0",
"transformers",
"license:cc-by-nc-4.0",
"model-index"
]
| automatic-speech-recognition | false | saattrupdan | null | saattrupdan/voxpopuli-wav2vec2-large-cv8-da | 7 | null | transformers | 14,254 | ---
language:
- da
license: cc-by-nc-4.0
tasks:
- automatic-speech-recognition
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: voxpopuli-wav2vec2-large-cv8-da
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 40.54
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 40.66
---
# VoxPopuli-Wav2vec2-large-CV8-da
## Model description
This model is a fine-tuned version of the Swedish acoustic model [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) on the Danish part of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), containing ~6 crowdsourced hours of read-aloud Danish speech.
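A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the file path is a placeholder, and this gives the transcription without the 5-gram language model reported below):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="saattrupdan/voxpopuli-wav2vec2-large-cv8-da")
# "audio.wav" is a placeholder path to Danish speech; 16 kHz mono audio is expected.
print(asr("audio.wav")["text"])
```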
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 48.04 | 40.54 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 48.43 | 40.66 | |
ali2066/finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56 | e2e2cdc674445104e54cd06df22037258569e293 | 2022-02-27T18:38:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ali2066 | null | ali2066/finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56 | 7 | null | transformers | 14,255 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3767
- Accuracy: 0.8638
- F1: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4489 | 0.8309 | 0.8969 |
| No log | 2.0 | 162 | 0.4429 | 0.8272 | 0.8915 |
| No log | 3.0 | 243 | 0.5154 | 0.8529 | 0.9083 |
| No log | 4.0 | 324 | 0.5552 | 0.8309 | 0.8925 |
| No log | 5.0 | 405 | 0.5896 | 0.8309 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
yunsizhang/distilbert-base-uncased-finetuned-emotion | a4e6e31300b723f649e8e65e6df0731d9943e59b | 2022-02-28T06:15:56.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | yunsizhang | null | yunsizhang/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,256 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259345317772325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2292
- Accuracy: 0.926
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
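A minimal usage sketch with the `text-classification` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="yunsizhang/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy with how this turned out!"))
```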
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8732 | 1.0 | 250 | 0.3363 | 0.903 | 0.9002 |
| 0.2645 | 2.0 | 500 | 0.2292 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
peterhsu/results-mt5-finetuned-squad-accelerate | acd8c7de58bdd84abcde37396010e72a9cc0f543 | 2022-03-07T10:33:14.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | peterhsu | null | peterhsu/results-mt5-finetuned-squad-accelerate | 7 | null | transformers | 14,257 | Entry not found |
coastalcph/fairlex-ecthr-minilm | ee483c173c003add5edcb273307c59eece113be8 | 2022-03-01T13:18:23.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"transformers",
"legal",
"fairlex",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | coastalcph | null | coastalcph/fairlex-ecthr-minilm | 7 | 1 | transformers | 14,258 | ---
language: en
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to <mask> whilst in the custody of Adana Security Directorate"
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-ecthr-minilm")
```
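A quick sanity check with the `fill-mask` pipeline, reusing the widget example from the metadata above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-ecthr-minilm")
print(fill_mask(
    "The applicant submitted that her husband was subjected to treatment "
    "amounting to <mask> whilst in the custody of Adana Security Directorate"
))
```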
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual bench-mark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) | |
BigSalmon/InformalToFormalLincoln25 | 9c7551ced310d674d309d955727bbea5b63356d6 | 2022-03-08T23:17:13.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln25 | 7 | null | transformers | 14,259 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln25")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln25")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
mcdzwil/bert-base-NER-finetuned-ner | ee0d6f2866638f7dbaf05377cd669ad550ebd451 | 2022-03-02T16:53:52.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | mcdzwil | null | mcdzwil/bert-base-NER-finetuned-ner | 7 | null | transformers | 14,260 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-NER-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1670
- Precision: 0.8358
- Recall: 0.7615
- F1: 0.7969
- Accuracy: 0.9437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 48 | 0.1892 | 0.8240 | 0.7267 | 0.7723 | 0.9341 |
| No log | 2.0 | 96 | 0.1812 | 0.8667 | 0.7458 | 0.8017 | 0.9441 |
| No log | 3.0 | 144 | 0.1670 | 0.8358 | 0.7615 | 0.7969 | 0.9437 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
clapika2010/hospital_finetuned | 835ff60d6be2da91c124aff7c00239c1b46cedd2 | 2022-03-11T20:44:57.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | clapika2010 | null | clapika2010/hospital_finetuned | 7 | null | transformers | 14,261 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-large-epoch5 | d364b7adf34892c33607cea1336260d8cd97a121 | 2022-03-03T09:17:32.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-large-epoch5 | 7 | null | transformers | 14,262 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-focus | 221b7dd6ec04ec2386062739894062f032f0b05c | 2022-03-03T10:47:52.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-focus | 7 | null | transformers | 14,263 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-focus_epoch5 | 00547ae4c7821ed30f9d665e4db35b610cf7c48f | 2022-03-03T10:55:00.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-focus_epoch5 | 7 | null | transformers | 14,264 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-with-asian | 3979910d674ee552fe1372819e7cb36b797c1bd5 | 2022-03-03T11:02:01.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-with-asian | 7 | null | transformers | 14,265 | Entry not found |
DoyyingFace/bert-asian-hate-tweets-self-unclean-with-asian-epoch5 | 95edbc434054ee87d1d4fe1ed89e694061b12eab | 2022-03-03T11:09:07.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | DoyyingFace | null | DoyyingFace/bert-asian-hate-tweets-self-unclean-with-asian-epoch5 | 7 | null | transformers | 14,266 | Entry not found |
pritamdeka/PubMedBert-PubMed200kRCT | f2c7902b96f1b63d7802fb476caec45e82a04726 | 2022-03-10T15:03:23.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | pritamdeka | null | pritamdeka/PubMedBert-PubMed200kRCT | 7 | null | transformers | 14,267 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
widget:
- text: "SAMPLE 32,441 archived appendix samples fixed in formalin and embedded in paraffin and tested for the presence of abnormal prion protein (PrP)."
model-index:
- name: PubMedBert-PubMed200kRCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBert-PubMed200kRCT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [PubMed200kRCT](https://github.com/Franck-Dernoncourt/pubmed-rct/tree/master/PubMed_200k_RCT) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2833
- Accuracy: 0.8942
## Model description
More information needed
## Intended uses & limitations
The model can be used for text classification of Randomized Controlled Trial (RCT) abstracts that do not have any structure. The text can be classified as one of the following:
* BACKGROUND
* CONCLUSIONS
* METHODS
* OBJECTIVE
* RESULTS
The model can be directly used like this:
```python
from transformers import TextClassificationPipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("pritamdeka/PubMedBert-PubMed200kRCT")
tokenizer = AutoTokenizer.from_pretrained("pritamdeka/PubMedBert-PubMed200kRCT")
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
pipe("Treatment of 12 healthy female subjects with CDCA for 2 days resulted in increased BAT activity.")
```
Results will be shown as follows:
```python
[[{'label': 'BACKGROUND', 'score': 0.0028450002428144217},
{'label': 'CONCLUSIONS', 'score': 0.2581048607826233},
{'label': 'METHODS', 'score': 0.015086210332810879},
{'label': 'OBJECTIVE', 'score': 0.0016815993003547192},
{'label': 'RESULTS', 'score': 0.7222822904586792}]]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3604 | 0.14 | 5000 | 0.3162 | 0.8821 |
| 0.3326 | 0.29 | 10000 | 0.3112 | 0.8843 |
| 0.3293 | 0.43 | 15000 | 0.3044 | 0.8870 |
| 0.3246 | 0.58 | 20000 | 0.3040 | 0.8871 |
| 0.32 | 0.72 | 25000 | 0.2969 | 0.8888 |
| 0.3143 | 0.87 | 30000 | 0.2929 | 0.8903 |
| 0.3095 | 1.01 | 35000 | 0.2917 | 0.8899 |
| 0.2844 | 1.16 | 40000 | 0.2957 | 0.8886 |
| 0.2778 | 1.3 | 45000 | 0.2943 | 0.8906 |
| 0.2779 | 1.45 | 50000 | 0.2890 | 0.8935 |
| 0.2752 | 1.59 | 55000 | 0.2881 | 0.8919 |
| 0.2736 | 1.74 | 60000 | 0.2835 | 0.8944 |
| 0.2725 | 1.88 | 65000 | 0.2833 | 0.8942 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
mp6kv/pump_intent_test | 1219c2981aa05dc259e800fcabfd985118ac6b55 | 2022-03-24T18:40:12.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | mp6kv | null | mp6kv/pump_intent_test | 7 | null | transformers | 14,268 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: pump_intent_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pump_intent_test
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
Trained on custom data generated by labeling text according to the three categories below. These categories are the subcategories of Pump, i.e. cases where a user asks a question and expects an answer in response:
- Value: a slot value or a calculation
- Clarification: asking for further information on a previous answer
- Testing: testing for knowledge of facts and definitions

The model takes a user's input text and classifies it into one of these three categories.
## Intended uses & limitations
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mp6kv/pump_intent_test")

output = classifier("What is the value of the length of the blue object?")
score = output[0]["score"]
label = output[0]["label"]
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
ttmusic/distilbert-base-uncased-finetuned-imdb-accelerate | 7cd252df4decfbd686a53efe204db1c85853f561 | 2022-03-06T08:03:13.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | ttmusic | null | ttmusic/distilbert-base-uncased-finetuned-imdb-accelerate | 7 | null | transformers | 14,269 | Entry not found |
gayanin/t5-small-mlm-paraphrasing | f65b3eecede7c4573e54ea87b49f0ab372630802 | 2022-03-07T13:35:36.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | gayanin | null | gayanin/t5-small-mlm-paraphrasing | 7 | null | transformers | 14,270 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-mlm-paraphrasing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mlm-paraphrasing
This model is a fine-tuned version of [gayanin/t5-small-mlm-pubmed](https://huggingface.co/gayanin/t5-small-mlm-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4243
- Rouge2 Precision: 0.8281
- Rouge2 Recall: 0.6508
- Rouge2 Fmeasure: 0.7125
## Model description
More information needed
## Intended uses & limitations
More information needed
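No official usage example is provided. As a generic sketch, assuming plain text in and paraphrased text out with beam search (the input sentence is hypothetical and any required prompt prefix is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "gayanin/t5-small-mlm-paraphrasing"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "The patient was administered the drug to reduce the risk of stroke."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```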
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.6445 | 0.75 | 500 | 0.5049 | 0.821 | 0.6477 | 0.7078 |
| 0.5227 | 1.51 | 1000 | 0.4748 | 0.8243 | 0.6492 | 0.7099 |
| 0.5126 | 2.26 | 1500 | 0.4594 | 0.8254 | 0.6506 | 0.7111 |
| 0.4858 | 3.02 | 2000 | 0.4492 | 0.8266 | 0.651 | 0.712 |
| 0.4669 | 3.77 | 2500 | 0.4421 | 0.8268 | 0.6508 | 0.7118 |
| 0.4684 | 4.52 | 3000 | 0.4374 | 0.8272 | 0.6513 | 0.7124 |
| 0.463 | 5.28 | 3500 | 0.4342 | 0.8274 | 0.6508 | 0.712 |
| 0.4558 | 6.03 | 4000 | 0.4301 | 0.8278 | 0.6508 | 0.7123 |
| 0.4553 | 6.79 | 4500 | 0.4283 | 0.8279 | 0.6507 | 0.7122 |
| 0.443 | 7.54 | 5000 | 0.4259 | 0.8281 | 0.6511 | 0.7125 |
| 0.441 | 8.3 | 5500 | 0.4263 | 0.828 | 0.6503 | 0.7121 |
| 0.444 | 9.05 | 6000 | 0.4244 | 0.8281 | 0.6507 | 0.7125 |
| 0.4392 | 9.8 | 6500 | 0.4243 | 0.8281 | 0.6508 | 0.7125 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
MrAnderson/yoso-512-full-trivia | c1ddcce00f4a0f5c3dbfc9dcb219292cb5aafe07 | 2022-03-07T21:31:29.000Z | [
"pytorch",
"yoso",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | MrAnderson | null | MrAnderson/yoso-512-full-trivia | 7 | null | transformers | 14,271 | Entry not found |
Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch | f63d35b4440607ce5ba2c461386772107e50d784 | 2022-03-08T05:53:14.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"summarization",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | Ameer05 | null | Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch | 7 | null | transformers | 14,272 | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch
This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5216
- Rouge1: 59.5791
- Rouge2: 51.3273
- Rougel: 56.9984
- Rougelsum: 59.1424
## Model description
More information needed
## Intended uses & limitations
More information needed
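No official usage example is provided. A minimal sketch, assuming the standard summarization pipeline works with this checkpoint (the resume text and generation lengths are illustrative):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch",
)

resume = (
    "Senior software engineer with 8 years of experience building distributed "
    "systems, leading a team of five, and shipping large-scale data pipelines."
)
print(summarizer(resume, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```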
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.0124 | 53.776 | 46.7427 | 50.7565 | 53.5502 |
| No log | 1.91 | 10 | 1.6353 | 61.8019 | 53.8614 | 58.9744 | 61.339 |
| No log | 2.91 | 15 | 1.5321 | 59.7045 | 51.5968 | 57.0823 | 59.2417 |
| No log | 3.91 | 20 | 1.4569 | 62.4379 | 54.5464 | 59.9202 | 61.9242 |
| 1.5608 | 4.91 | 25 | 1.4613 | 63.3808 | 55.8818 | 61.432 | 63.0208 |
| 1.5608 | 5.91 | 30 | 1.4321 | 59.6761 | 50.9812 | 56.7977 | 59.1214 |
| 1.5608 | 6.91 | 35 | 1.4753 | 62.6439 | 54.7158 | 60.3831 | 62.1046 |
| 1.5608 | 7.91 | 40 | 1.4783 | 60.2735 | 52.7462 | 57.77 | 59.9725 |
| 0.6428 | 8.91 | 45 | 1.4974 | 62.8691 | 54.9062 | 60.3496 | 62.5132 |
| 0.6428 | 9.91 | 50 | 1.5216 | 59.5791 | 51.3273 | 56.9984 | 59.1424 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.10.3
|
lewtun/wav2vec2-base-100k-voxpopuli-finetuned-gtzan | 6dab5e3179c3df6b00bcde0835709995aba33dde | 2022-03-14T16:54:23.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"transformers",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"model-index"
]
| audio-classification | false | lewtun | null | lewtun/wav2vec2-base-100k-voxpopuli-finetuned-gtzan | 7 | null | transformers | 14,273 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-100k-voxpopuli-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-100k-voxpopuli-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2-base-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-base-100k-voxpopuli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9408
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
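No official usage example is provided. A minimal sketch, assuming the standard audio-classification pipeline ("song.wav" is a hypothetical local file; the pipeline decodes audio at the expected sampling rate when ffmpeg is available):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="lewtun/wav2vec2-base-100k-voxpopuli-finetuned-gtzan",
)

# Top three predicted GTZAN genres for the clip
predictions = classifier("song.wav", top_k=3)
print(predictions)
```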
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 225 | 2.1672 | 0.3 |
| 2.1675 | 2.0 | 450 | 2.0095 | 0.29 |
| 2.1675 | 3.0 | 675 | 1.7326 | 0.29 |
| 1.7199 | 4.0 | 900 | 1.4980 | 0.49 |
| 1.7199 | 5.0 | 1125 | 1.4088 | 0.37 |
| 1.3585 | 6.0 | 1350 | 1.2238 | 0.54 |
| 1.3585 | 7.0 | 1575 | 1.3579 | 0.52 |
| 1.0599 | 8.0 | 1800 | 0.9954 | 0.62 |
| 1.0599 | 9.0 | 2025 | 0.9543 | 0.73 |
| 0.8337 | 10.0 | 2250 | 0.9428 | 0.76 |
| 0.8337 | 11.0 | 2475 | 0.8810 | 0.78 |
| 0.5861 | 12.0 | 2700 | 0.7753 | 0.76 |
| 0.5861 | 13.0 | 2925 | 0.9981 | 0.74 |
| 0.3662 | 14.0 | 3150 | 1.1597 | 0.77 |
| 0.3662 | 15.0 | 3375 | 1.0466 | 0.79 |
| 0.277 | 16.0 | 3600 | 1.0763 | 0.81 |
| 0.277 | 17.0 | 3825 | 0.8407 | 0.87 |
| 0.1731 | 18.0 | 4050 | 0.9317 | 0.86 |
| 0.1731 | 19.0 | 4275 | 0.8545 | 0.87 |
| 0.1489 | 20.0 | 4500 | 0.9408 | 0.86 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.4
- Tokenizers 0.11.6
|
lijingxin/distilbert-base-uncased-finetuned-emotion | 2ac68ffb0399344044517c10121cb0df2f3ccdb5 | 2022-03-09T10:26:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lijingxin | null | lijingxin/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,274 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9226367098786769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
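For reference, these settings map onto `TrainingArguments` roughly as follows (the output directory and evaluation strategy are illustrative additions, not recorded in this card; the Adam betas and epsilon above are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # illustrative; matches the per-epoch results table
)
```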
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8009 | 1.0 | 250 | 0.3027 | 0.9045 | 0.9015 |
| 0.2402 | 2.0 | 500 | 0.2161 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ctoraman/RoBERTa-TR-medium-wp-66k | 6bfda8b2ab8eeb4801832ca95ad11c3d5eb0e90d | 2022-04-20T07:01:39.000Z | [
"pytorch",
"roberta",
"fill-mask",
"tr",
"dataset:oscar",
"arxiv:2204.08832",
"transformers",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| fill-mask | false | ctoraman | null | ctoraman/RoBERTa-TR-medium-wp-66k | 7 | null | transformers | 14,275 | ---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium WordPiece 66k (uncased)
Pretrained model on Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretrained corpus is OSCAR's Turkish split, but it is further filtered and cleaned.
Model architecture is similar to bert-medium (8 layers, 8 heads, and 512 hidden size). Tokenization algorithm is WordPiece. Vocabulary size is 66.7k.
The details and performance comparisons can be found at this paper:
https://arxiv.org/abs/2204.08832
The following code can be used for model loading and tokenization; the example max length (514) can be changed:
```python
from transformers import AutoModel, AutoModelForSequenceClassification, PreTrainedTokenizerFast

model = AutoModel.from_pretrained([model_path])
# for sequence classification:
# model = AutoModelForSequenceClassification.from_pretrained([model_path], num_labels=[num_classes])

tokenizer = PreTrainedTokenizerFast(tokenizer_file=[file_path])
tokenizer.mask_token = "[MASK]"
tokenizer.cls_token = "[CLS]"
tokenizer.sep_token = "[SEP]"
tokenizer.pad_token = "[PAD]"
tokenizer.unk_token = "[UNK]"
tokenizer.bos_token = "[CLS]"
tokenizer.eos_token = "[SEP]"
tokenizer.model_max_length = 514
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2204.08832,
doi = {10.48550/ARXIV.2204.08832},
url = {https://arxiv.org/abs/2204.08832},
author = {Toraman, Cagri and Yilmaz, Eyup Halit and Şahinuç, Furkan and Ozcelik, Oguzhan},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Impact of Tokenization on Language Models: An Analysis for Turkish},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
``` |
Sh3ra/arabert-finetuned-arcd | ab9a55cbc7321f0f260f49117619f851ae3ac20f | 2022-03-09T13:30:46.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | Sh3ra | null | Sh3ra/arabert-finetuned-arcd | 7 | null | transformers | 14,276 | Entry not found |
orzhan/ruroberta-ruatd-multi | 83d746d87354f7ac11fc4becbb6c1c39ae08f3d0 | 2022-03-09T15:35:12.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | orzhan | null | orzhan/ruroberta-ruatd-multi | 7 | null | transformers | 14,277 | Entry not found |
aaraki/distilbert-base-uncased-finetuned-ner | b44566a380890d27e811c8eac0bd61375f4fae84 | 2022-03-10T01:42:14.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | aaraki | null | aaraki/distilbert-base-uncased-finetuned-ner | 7 | null | transformers | 14,278 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8856800348735833
- name: Recall
type: recall
value: 0.9091620986687549
- name: F1
type: f1
value: 0.8972674579078112
- name: Accuracy
type: accuracy
value: 0.9774572259202186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0788
- Precision: 0.8857
- Recall: 0.9092
- F1: 0.8973
- Accuracy: 0.9775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2473 | 1.0 | 878 | 0.0788 | 0.8857 | 0.9092 | 0.8973 | 0.9775 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
muneson/xls-r-ab-test | 67ae061bc118e87a2a498c30301a9c7ece69302b | 2022-03-10T13:49:29.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ab",
"dataset:common_voice",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | muneson | null | muneson/xls-r-ab-test | 7 | null | transformers | 14,279 | ---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 207.6055
- Wer: 1.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.5.dev0
- Tokenizers 0.11.6
|
Someshfengde/autonlp-kaggledays-625717992 | 0e4143d7bc8074472bf24bfd87b826df23904088 | 2022-03-10T15:01:53.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:Someshfengde/autonlp-data-kaggledays",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | Someshfengde | null | Someshfengde/autonlp-kaggledays-625717992 | 7 | null | transformers | 14,280 | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- Someshfengde/autonlp-data-kaggledays
co2_eq_emissions: 28.622267513847273
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 625717992
- CO2 Emissions (in grams): 28.622267513847273
## Validation Metrics
- Loss: 0.8782362937927246
- Accuracy: 0.6022282660559214
- Macro F1: 0.6024258279848015
- Micro F1: 0.6022282660559214
- Weighted F1: 0.6024299908624371
- Macro Precision: 0.604093172183357
- Micro Precision: 0.6022282660559214
- Weighted Precision: 0.6041166306778806
- Macro Recall: 0.6022424576798522
- Micro Recall: 0.6022282660559214
- Weighted Recall: 0.6022282660559214
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Someshfengde/autonlp-kaggledays-625717992
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Someshfengde/autonlp-kaggledays-625717992", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Someshfengde/autonlp-kaggledays-625717992", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
calebcsjm/reverse_text_generation_HarryPotter | 6dc9d94201086e34489739114d1756992dae2c7a | 2022-03-12T06:51:24.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | calebcsjm | null | calebcsjm/reverse_text_generation_HarryPotter | 7 | null | transformers | 14,281 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reverse_text_generation_HarryPotter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reverse_text_generation_HarryPotter
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
davanstrien/test_mae_flysheet | 82e0a29c778733ecadd692a457efe93f352a63a5 | 2022-03-13T17:00:03.000Z | [
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"dataset:image_folder",
"transformers",
"masked-auto-encoding",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| null | false | davanstrien | null | davanstrien/test_mae_flysheet | 7 | null | transformers | 14,282 | ---
license: apache-2.0
tags:
- masked-auto-encoding
- generated_from_trainer
datasets:
- image_folder
model-index:
- name: test_mae_flysheet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_mae_flysheet
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/flysheet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2675
## Model description
More information needed
## Intended uses & limitations
More information needed
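As a rough sketch of how a ViT-MAE pre-training checkpoint like this one is typically loaded (assuming the preprocessor config is stored in the repository; the image path is hypothetical):
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining

feature_extractor = AutoFeatureExtractor.from_pretrained("davanstrien/test_mae_flysheet")
model = ViTMAEForPreTraining.from_pretrained("davanstrien/test_mae_flysheet")

image = Image.open("flysheet.jpg")  # hypothetical local image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Reconstruction loss over the randomly masked patches, as optimised during pre-training
print(outputs.loss)
```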
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.284 | 1.0 | 28 | 2.2812 |
| 2.137 | 2.0 | 56 | 2.0288 |
| 1.6016 | 3.0 | 84 | 1.2437 |
| 0.8055 | 4.0 | 112 | 0.7419 |
| 0.5304 | 5.0 | 140 | 0.5151 |
| 0.4873 | 6.0 | 168 | 0.4884 |
| 0.442 | 7.0 | 196 | 0.4441 |
| 0.4039 | 8.0 | 224 | 0.4159 |
| 0.3866 | 9.0 | 252 | 0.3975 |
| 0.391 | 10.0 | 280 | 0.3869 |
| 0.3549 | 11.0 | 308 | 0.3801 |
| 0.3462 | 12.0 | 336 | 0.3577 |
| 0.3402 | 13.0 | 364 | 0.3519 |
| 0.3357 | 14.0 | 392 | 0.3447 |
| 0.3474 | 15.0 | 420 | 0.3369 |
| 0.3254 | 16.0 | 448 | 0.3386 |
| 0.3033 | 17.0 | 476 | 0.3294 |
| 0.3047 | 18.0 | 504 | 0.3274 |
| 0.3103 | 19.0 | 532 | 0.3209 |
| 0.3067 | 20.0 | 560 | 0.3186 |
| 0.2959 | 21.0 | 588 | 0.3190 |
| 0.2899 | 22.0 | 616 | 0.3147 |
| 0.2872 | 23.0 | 644 | 0.3082 |
| 0.2956 | 24.0 | 672 | 0.3070 |
| 0.2865 | 25.0 | 700 | 0.3072 |
| 0.2947 | 26.0 | 728 | 0.3072 |
| 0.2811 | 27.0 | 756 | 0.3131 |
| 0.2935 | 28.0 | 784 | 0.3069 |
| 0.2814 | 29.0 | 812 | 0.3043 |
| 0.2753 | 30.0 | 840 | 0.2984 |
| 0.2823 | 31.0 | 868 | 0.2995 |
| 0.2962 | 32.0 | 896 | 0.3012 |
| 0.2869 | 33.0 | 924 | 0.3050 |
| 0.2833 | 34.0 | 952 | 0.2960 |
| 0.2892 | 35.0 | 980 | 0.3039 |
| 0.2764 | 36.0 | 1008 | 0.3010 |
| 0.2807 | 37.0 | 1036 | 0.2998 |
| 0.2843 | 38.0 | 1064 | 0.2989 |
| 0.2808 | 39.0 | 1092 | 0.2970 |
| 0.2862 | 40.0 | 1120 | 0.2940 |
| 0.2601 | 41.0 | 1148 | 0.2952 |
| 0.2742 | 42.0 | 1176 | 0.2940 |
| 0.2791 | 43.0 | 1204 | 0.2997 |
| 0.2759 | 44.0 | 1232 | 0.2951 |
| 0.2819 | 45.0 | 1260 | 0.2896 |
| 0.287 | 46.0 | 1288 | 0.2938 |
| 0.2711 | 47.0 | 1316 | 0.2973 |
| 0.2782 | 48.0 | 1344 | 0.2946 |
| 0.2674 | 49.0 | 1372 | 0.2913 |
| 0.268 | 50.0 | 1400 | 0.2944 |
| 0.2624 | 51.0 | 1428 | 0.2940 |
| 0.2842 | 52.0 | 1456 | 0.2978 |
| 0.2753 | 53.0 | 1484 | 0.2951 |
| 0.2733 | 54.0 | 1512 | 0.2880 |
| 0.2782 | 55.0 | 1540 | 0.2969 |
| 0.2789 | 56.0 | 1568 | 0.2919 |
| 0.2815 | 57.0 | 1596 | 0.2916 |
| 0.2629 | 58.0 | 1624 | 0.2947 |
| 0.2716 | 59.0 | 1652 | 0.2828 |
| 0.2623 | 60.0 | 1680 | 0.2924 |
| 0.2773 | 61.0 | 1708 | 0.2765 |
| 0.268 | 62.0 | 1736 | 0.2754 |
| 0.2839 | 63.0 | 1764 | 0.2744 |
| 0.2684 | 64.0 | 1792 | 0.2744 |
| 0.2865 | 65.0 | 1820 | 0.2716 |
| 0.2845 | 66.0 | 1848 | 0.2769 |
| 0.2663 | 67.0 | 1876 | 0.2754 |
| 0.269 | 68.0 | 1904 | 0.2737 |
| 0.2681 | 69.0 | 1932 | 0.2697 |
| 0.2748 | 70.0 | 1960 | 0.2779 |
| 0.2769 | 71.0 | 1988 | 0.2728 |
| 0.2805 | 72.0 | 2016 | 0.2729 |
| 0.2771 | 73.0 | 2044 | 0.2728 |
| 0.2717 | 74.0 | 2072 | 0.2749 |
| 0.267 | 75.0 | 2100 | 0.2732 |
| 0.2812 | 76.0 | 2128 | 0.2743 |
| 0.2749 | 77.0 | 2156 | 0.2739 |
| 0.2746 | 78.0 | 2184 | 0.2730 |
| 0.2707 | 79.0 | 2212 | 0.2743 |
| 0.2644 | 80.0 | 2240 | 0.2740 |
| 0.2691 | 81.0 | 2268 | 0.2727 |
| 0.2679 | 82.0 | 2296 | 0.2771 |
| 0.2748 | 83.0 | 2324 | 0.2744 |
| 0.2744 | 84.0 | 2352 | 0.2703 |
| 0.2715 | 85.0 | 2380 | 0.2733 |
| 0.2682 | 86.0 | 2408 | 0.2715 |
| 0.2641 | 87.0 | 2436 | 0.2722 |
| 0.274 | 88.0 | 2464 | 0.2748 |
| 0.2669 | 89.0 | 2492 | 0.2753 |
| 0.2707 | 90.0 | 2520 | 0.2724 |
| 0.2755 | 91.0 | 2548 | 0.2703 |
| 0.2769 | 92.0 | 2576 | 0.2737 |
| 0.2659 | 93.0 | 2604 | 0.2721 |
| 0.2674 | 94.0 | 2632 | 0.2763 |
| 0.2723 | 95.0 | 2660 | 0.2723 |
| 0.2723 | 96.0 | 2688 | 0.2744 |
| 0.272 | 97.0 | 2716 | 0.2686 |
| 0.27 | 98.0 | 2744 | 0.2728 |
| 0.2721 | 99.0 | 2772 | 0.2743 |
| 0.2692 | 100.0 | 2800 | 0.2748 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
1one5ome/gpt2-chinese-gulong | 27570de9fe67d03218aa7ea69087e714449e6e09 | 2022-03-16T07:11:44.000Z | [
"pytorch",
"transformers",
"license:mit"
]
| null | false | 1one5ome | null | 1one5ome/gpt2-chinese-gulong | 7 | 0 | transformers | 14,283 | ---
license: mit
---
This model can generate Gu Long-style wuxia text given a prefix. For more information, please refer to [here](https://github.com/1one5ome/GPT2-Chinese-Gulong). |
GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner | fe50f47fae9f310e60077eaad362976d3beb9341 | 2022-03-14T14:17:47.000Z | [
"pytorch",
"distilbert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | GPL | null | GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner | 7 | null | sentence-transformers | 14,284 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner')
model = AutoModel.from_pretrained('GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=GPL/bioasq-1m-distilbert-tas-b-gpl-self_miner)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dennishauser/distilbert-base-uncased-finetuned-emotion | cbcc06d8a2b7c6576930fbf2b190b7cfb66d82d3 | 2022-03-30T12:23:53.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | dennishauser | null | dennishauser/distilbert-base-uncased-finetuned-emotion | 7 | null | transformers | 14,285 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2128
- Accuracy: 0.7597
- F1: 0.6574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3846 | 1.0 | 243 | 1.2627 | 0.7598 | 0.6561 |
| 1.0463 | 2.0 | 486 | 1.2128 | 0.7597 | 0.6574 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
facebook/regnet-x-040 | bd484ecdd0284c5666268efcb5b91ba531eb0462 | 2022-06-30T18:57:14.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-x-040 | 7 | null | transformers | 14,286 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
aymanm419/araSpeedest | a7576428d6b634ead074fc2f3975399293192a3e | 2022-03-16T00:00:53.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | aymanm419 | null | aymanm419/araSpeedest | 7 | null | transformers | 14,287 | Entry not found |
edbeeching/decision-transformer-gym-halfcheetah-medium-replay | 09999d0df56e262b9ac9f77e6b8e676653ce4676 | 2022-06-29T19:21:08.000Z | [
"pytorch",
"decision_transformer",
"feature-extraction",
"arxiv:2106.01345",
"transformers",
"deep-reinforcement-learning",
"reinforcement-learning",
"decision-transformer",
"gym-continous-control"
]
| reinforcement-learning | false | edbeeching | null | edbeeching/decision-transformer-gym-halfcheetah-medium-replay | 7 | null | transformers | 14,288 | ---
tags:
- deep-reinforcement-learning
- reinforcement-learning
- decision-transformer
- gym-continous-control
pipeline_tag: reinforcement-learning
---
# Decision Transformer model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment
This is a [Decision Transformer](https://arxiv.org/abs/2106.01345) model trained on medium-replay trajectories sampled from the Gym HalfCheetah environment.
The following normalization coefficients are required to use this model:
mean = [-0.12880704, 0.37381196, -0.14995988, -0.23479079, -0.28412786, -0.13096535, -0.20157982, -0.06517727, 3.4768248, -0.02785066, -0.01503525, 0.07697279, 0.01266712, 0.0273253, 0.02316425, 0.01043872, -0.01583941]
std = [0.17019016, 1.2844249, 0.33442774, 0.36727592, 0.26092398, 0.4784107, 0.31814206 ,0.33552638, 2.0931616, 0.80374336, 1.9044334, 6.57321, 7.5728636, 5.0697494, 9.105554, 6.0856543, 7.253004, 5]
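A minimal sketch of how these statistics are typically applied before querying the model. The HalfCheetah observation is 17-dimensional, so only the first 17 entries of the `std` list above are used here (an assumption); see the example script below for the full evaluation loop.
```python
import torch
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "edbeeching/decision-transformer-gym-halfcheetah-medium-replay"
)

# Dataset statistics from this card (17-dimensional HalfCheetah observation)
state_mean = torch.tensor([-0.12880704, 0.37381196, -0.14995988, -0.23479079, -0.28412786,
                           -0.13096535, -0.20157982, -0.06517727, 3.4768248, -0.02785066,
                           -0.01503525, 0.07697279, 0.01266712, 0.0273253, 0.02316425,
                           0.01043872, -0.01583941])
state_std = torch.tensor([0.17019016, 1.2844249, 0.33442774, 0.36727592, 0.26092398,
                          0.4784107, 0.31814206, 0.33552638, 2.0931616, 0.80374336,
                          1.9044334, 6.57321, 7.5728636, 5.0697494, 9.105554,
                          6.0856543, 7.253004])

def normalize_state(raw_state):
    # Raw environment observations must be standardised with the dataset
    # statistics before being passed to the model's `states` input.
    state = torch.as_tensor(raw_state, dtype=torch.float32)
    return (state - state_mean) / state_std
```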
See our [Blog Post](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing), [Colab notebook](https://colab.research.google.com/drive/1K3UuajwoPY1MzRKNkONNRS3gS5DxZ-qF?usp=sharing) or [Example Script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/decision_transformer) for usage.
|
DrishtiSharma/poem-gen-t5-small_v1 | 57fb8474d571736e95ad237e8781c10738705559 | 2022-03-16T17:30:57.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | DrishtiSharma | null | DrishtiSharma/poem-gen-t5-small_v1 | 7 | null | transformers | 14,289 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-t5-small_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-t5-small_v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.5397 | 0.32 | 5000 | 3.3474 |
| 3.4107 | 0.63 | 10000 | 3.2260 |
| 3.3236 | 0.95 | 15000 | 3.1414 |
| 3.25 | 1.26 | 20000 | 3.0884 |
| 3.2055 | 1.58 | 25000 | 3.0461 |
| 3.1677 | 1.89 | 30000 | 3.0057 |
| 3.1189 | 2.21 | 35000 | 2.9786 |
| 3.0972 | 2.53 | 40000 | 2.9533 |
| 3.0855 | 2.84 | 45000 | 2.9318 |
| 3.0364 | 3.16 | 50000 | 2.9124 |
| 3.0125 | 3.47 | 55000 | 2.8962 |
| 2.9987 | 3.79 | 60000 | 2.8812 |
| 2.9734 | 4.1 | 65000 | 2.8675 |
| 2.9782 | 4.42 | 70000 | 2.8563 |
| 2.9492 | 4.74 | 75000 | 2.8446 |
| 2.9383 | 5.05 | 80000 | 2.8360 |
| 2.9322 | 5.37 | 85000 | 2.8250 |
| 2.9193 | 5.68 | 90000 | 2.8159 |
| 2.9119 | 6.0 | 95000 | 2.8095 |
| 2.8893 | 6.31 | 100000 | 2.8046 |
| 2.8927 | 6.63 | 105000 | 2.7975 |
| 2.8944 | 6.95 | 110000 | 2.7879 |
| 2.8568 | 7.26 | 115000 | 2.7856 |
| 2.8648 | 7.58 | 120000 | 2.7808 |
| 2.8534 | 7.89 | 125000 | 2.7737 |
| 2.8563 | 8.21 | 130000 | 2.7696 |
| 2.8387 | 8.52 | 135000 | 2.7664 |
| 2.8328 | 8.84 | 140000 | 2.7643 |
| 2.8137 | 9.16 | 145000 | 2.7615 |
| 2.8058 | 9.47 | 150000 | 2.7548 |
| 2.8138 | 9.79 | 155000 | 2.7547 |
| 2.8098 | 10.1 | 160000 | 2.7506 |
| 2.7944 | 10.42 | 165000 | 2.7479 |
| 2.809 | 10.73 | 170000 | 2.7443 |
| 2.7897 | 11.05 | 175000 | 2.7431 |
| 2.7955 | 11.37 | 180000 | 2.7403 |
| 2.793 | 11.68 | 185000 | 2.7403 |
| 2.798 | 12.0 | 190000 | 2.7351 |
| 2.7955 | 12.31 | 195000 | 2.7351 |
| 2.785 | 12.63 | 200000 | 2.7329 |
| 2.7701 | 12.94 | 205000 | 2.7329 |
| 2.7744 | 13.26 | 210000 | 2.7317 |
| 2.7827 | 13.58 | 215000 | 2.7295 |
| 2.7793 | 13.89 | 220000 | 2.7303 |
| 2.7782 | 14.21 | 225000 | 2.7298 |
| 2.7762 | 14.52 | 230000 | 2.7289 |
| 2.7719 | 14.84 | 235000 | 2.7292 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_mls_upd | d1c40faac8ccb8f4b1810dbc0e4df3575fbd8dab | 2022-03-16T13:13:22.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:xtreme_s",
"transformers",
"mls",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | anton-l | null | anton-l/xtreme_s_xlsr_mls_upd | 7 | null | transformers | 14,290 | ---
language:
- pl
license: apache-2.0
tags:
- mls
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
model-index:
- name: xtreme_s_xlsr_mls_upd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_mls_upd
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MLS.PL dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1489
- Wer: 1.0
- Cer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 3.4678 | 0.59 | 20 | 3.4581 | 1.0 | 1.0 |
| 3.1713 | 1.18 | 40 | 3.1816 | 1.0 | 1.0 |
| 3.134 | 1.76 | 60 | 3.1538 | 1.0 | 1.0 |
| 3.132 | 2.35 | 80 | 3.1411 | 1.0 | 1.0 |
| 3.1295 | 2.94 | 100 | 3.1373 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
ningkko/drug-stance-bert | 42e47b16591b86d423ef609327939d0b1c8aebf2 | 2022-04-30T17:29:17.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-classification | false | ningkko | null | ningkko/drug-stance-bert | 7 | 1 | transformers | 14,291 | ---
tags:
- generated_from_trainer
model-index:
- name: drug-stance-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# drug-stance-bert
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on [COVID-CQ](https://github.com/eceveco/COVID-CQ), a dataset that contains 3-label annotated opinions (negative, neutral, and positive) of the tweet initiators regarding the use of Chloroquine or Hydroxychloroquine for the treatment or prevention of the coronavirus.
## Intended uses & limitations
Predict opinions (negative, neutral, and positive) of tweet initiators regarding the use of a drug for the treatment or prevention of the coronavirus. Note that having multiple drug names with different stances in a single tweet can confuse the model.
## Inference & understanding
We followed COVID-CQ to use the following label representation:
- 0 -> None/Neutral;
- 1 -> Against;
- 2 -> Favor
Try these examples:
- The gov's killing people by banning Ivm
- Great news cheers everybody:) ivermectin proven to not work by rct lol
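A minimal sketch for trying those examples (the inference notebook linked below is the authoritative reference; the `LABEL_n` mapping assumes the raw config labels follow the 0/1/2 scheme above):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ningkko/drug-stance-bert")

id2stance = {"LABEL_0": "none/neutral", "LABEL_1": "against", "LABEL_2": "favor"}
for tweet in [
    "The gov's killing people by banning Ivm",
    "Great news cheers everybody:) ivermectin proven to not work by rct lol",
]:
    pred = classifier(tweet)[0]
    print(tweet, "->", id2stance.get(pred["label"], pred["label"]), round(pred["score"], 3))
```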
## Tutorial
See our Github repo for [inference scripts](https://github.com/ningkko/COVID-drug/blob/main/stance_detection/inference.ipynb)
## Model description
"We developed two COVID-drug-stance RoBERTa-base models by fine-tuning a pre-trained Twitter-specific stance detection model on a stance data set called COVID-CQ. The data were divided into training-dev-test validation datasets with a 70:10:20 ratio. Model I (COVID-drug-stance-BERT) was trained on the original tweet data, and Model II (COVID-drug-stance-BERT-masked) was trained on tweets with drug names masked as “[mask]” for model generalizability on different drugs. The two models had similar performance on the COVID-19 validation set: COVID-drug-stance-BERT had an accuracy of 86.88%, and the masked model had an accuracy of 86.67%. The two models were then evaluated by predicting tweet initiators’ attitudes towards the drug mentioned in each tweet using randomly selected test sets (100 tweets) of each drug (Hydroxychloquine, Ivermectin, Molnupiravir, Remdesivir). As suggested by the evaluation in Table 2, Model I had better performance and was therefore used in this study".
| **Drug** | **Model I: Original Tweet** | | | **Model II: Drug Names Masked** | | |
|------------------------|:---------------------------:|:-----------:|:------------:|:-------------------------------:|:-----------:|:------------:|
| | **Precision** | **Recall** | **F1-Score** | **Precision** | **Recall** | **F1-Score** |
| **Hydroxychloroquine** | 0.93 | 0.92 | **0.92** | 0.84 | 0.83 | 0.83 |
| **Ivermectin** | 0.92 | 0.91 | **0.91** | 0.72 | 0.68 | 0.68 |
| **Molnupiravir** | 0.89 | 0.89 | **0.89** | 0.78 | 0.77 | 0.77 |
| **Remdesivir** | 0.82 | 0.79 | **0.79** | 0.70 | 0.66 | 0.66 |
The model uploaded here is Model I.
## Training and evaluation data
COVID-CQ
## Training procedure
See Github
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.11.0
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
facebook/regnet-y-004 | 53c29f91f4e439bf87abc1bbb46bf5d8dcf73c3e | 2022-06-30T10:13:42.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-004 | 7 | null | transformers | 14,292 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-004")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-004")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-064 | b02b9bb9bfad0254e733f3bffd6b512fdb3692c0 | 2022-06-30T10:14:12.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-064 | 7 | null | transformers | 14,293 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-064")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-064")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
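The same checkpoint also works with the high-level `pipeline` helper. The snippet below is a minimal sketch: it uses the `facebook/regnet-y-064` checkpoint, and the image path is a placeholder to replace with your own file.
```python
>>> from transformers import pipeline

>>> # Minimal sketch: classify a local image with the image-classification pipeline.
>>> # "path/to/your_image.jpg" is a placeholder; replace it with your own file.
>>> classifier = pipeline("image-classification", model="facebook/regnet-y-064")
>>> print(classifier("path/to/your_image.jpg")[0])
```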
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-120 | 475446c34ff6aed51d0af467d04b1186300b8ab0 | 2022-06-30T10:23:09.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-120 | 7 | null | transformers | 14,294 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-120")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-120")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
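The same checkpoint also works with the high-level `pipeline` helper. The snippet below is a minimal sketch: it uses the `facebook/regnet-y-120` checkpoint, and the image path is a placeholder to replace with your own file.
```python
>>> from transformers import pipeline

>>> # Minimal sketch: classify a local image with the image-classification pipeline.
>>> # "path/to/your_image.jpg" is a placeholder; replace it with your own file.
>>> classifier = pipeline("image-classification", model="facebook/regnet-y-120")
>>> print(classifier("path/to/your_image.jpg")[0])
```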
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
facebook/regnet-y-320 | 8c8414e797f9a2d2a1fe4e2f3c434d2bbd141b08 | 2022-06-30T10:13:35.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-y-320 | 7 | null | transformers | 14,295 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They start from a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
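The same checkpoint also works with the high-level `pipeline` helper. The snippet below is a minimal sketch: it uses the `facebook/regnet-y-320` checkpoint, and the image path is a placeholder to replace with your own file.
```python
>>> from transformers import pipeline

>>> # Minimal sketch: classify a local image with the image-classification pipeline.
>>> # "path/to/your_image.jpg" is a placeholder; replace it with your own file.
>>> classifier = pipeline("image-classification", model="facebook/regnet-y-320")
>>> print(classifier("path/to/your_image.jpg")[0])
```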
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
DeltaHub/Spelling_T5-lowrankadapter | 610bcf082974aea1d7893c45ea15904b37fa0c3a | 2022-03-20T00:40:52.000Z | [
"pytorch",
"transformers"
]
| null | false | DeltaHub | null | DeltaHub/Spelling_T5-lowrankadapter | 7 | null | transformers | 14,296 | Entry not found |
Aleksandar1932/gpt-neo-125M-hip-hop | cd905aac373860e30a36284ff0605eed052c777d | 2022-03-19T19:12:26.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
]
| text-generation | false | Aleksandar1932 | null | Aleksandar1932/gpt-neo-125M-hip-hop | 7 | null | transformers | 14,297 | Entry not found |
doctorlan/autonlp-JD-bert-653619233 | 67bb35e555aa4c9d265b2dcddd4065882ae9f3fe | 2022-03-21T08:54:10.000Z | [
"pytorch",
"bert",
"text-classification",
"unk",
"dataset:doctorlan/autonlp-data-JD-bert",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | doctorlan | null | doctorlan/autonlp-JD-bert-653619233 | 7 | null | transformers | 14,298 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-JD-bert
co2_eq_emissions: 5.919372931976555
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653619233
- CO2 Emissions (in grams): 5.919372931976555
## Validation Metrics
- Loss: 0.15083155035972595
- Accuracy: 0.952650883627876
- Precision: 0.9631399317406143
- Recall: 0.9412941961307538
- AUC: 0.9828776962419389
- F1: 0.9520917678812415
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-JD-bert-653619233
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
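# A minimal follow-up sketch: map the logits to a predicted class index.
# The label names depend on the model config; AutoNLP checkpoints may only
# expose generic names such as LABEL_0 / LABEL_1.
predicted_class = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])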
``` |
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN | f5e3896f611e6e774c206551423ef0d1752690d8 | 2022-03-21T22:07:55.000Z | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN | 7 | null | transformers | 14,299 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2276
- Precision: 0.8078
- Recall: 0.8258
- F1: 0.8167
- Accuracy: 0.9629
## Model description
This model performs Named Entity Recognition for six entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the English CRAFT (Colorado Richly Annotated Full Text) corpus. Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical. The model was trained on data augmented via entity replacement: 20% of the entities were replaced using lists of entities for each tag, drawn from the official ontology for each entity class, and the original and augmented datasets were concatenated.
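For quick inference, the fine-tuned checkpoint can be loaded with the token-classification pipeline. The snippet below is a minimal sketch: the example sentence is an arbitrary placeholder, and the aggregation strategy is an assumption rather than part of the original setup.
```python
from transformers import pipeline

# Minimal sketch: run NER with the fine-tuned checkpoint.
# "simple" aggregation merges word pieces into whole-entity spans.
ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN",
    aggregation_strategy="simple",
)
print(ner("BRCA1 is a human gene that produces a tumor suppressor protein."))
```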
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
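As a rough illustration, the hyperparameters above map to a `TrainingArguments` configuration along these lines; this is a hedged sketch, and the output directory is a placeholder rather than a value from the original run.
```python
from transformers import TrainingArguments

# Hedged sketch of the listed hyperparameters as TrainingArguments; unspecified
# options keep their library defaults (including the Adam betas/epsilon above).
training_args = TrainingArguments(
    output_dir="roberta-ner-craft-augmented-en",  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```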
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842 | 1.0 | 2719 | 0.1765 | 0.7606 | 0.7785 | 0.7695 | 0.9542 |
| 0.0392 | 2.0 | 5438 | 0.1971 | 0.7990 | 0.7958 | 0.7974 | 0.9596 |
| 0.0138 | 3.0 | 8157 | 0.2094 | 0.8013 | 0.8196 | 0.8103 | 0.9620 |
| 0.0082 | 4.0 | 10876 | 0.2276 | 0.8078 | 0.8258 | 0.8167 | 0.9629 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|