| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
dbarbedillo/a2c-AntBulletEnv-v0 | dbarbedillo | 2022-07-27T22:25:58Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T22:24:45Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1748.24 +/- 84.28
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="dbarbedillo/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
mariastull/Reinforce-3 | mariastull | 2022-07-27T21:39:59Z | 0 | 0 | null | [
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T21:39:47Z | ---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-3
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ejin/bert-base-cased-finetuned-ner | ejin | 2022-07-27T21:16:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-26T20:04:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8940432730834298
- name: Recall
type: recall
value: 0.9008612955320294
- name: F1
type: f1
value: 0.8974393350315055
- name: Accuracy
type: accuracy
value: 0.9749955848590098
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0919
- Precision: 0.8940
- Recall: 0.9009
- F1: 0.8974
- Accuracy: 0.9750
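A minimal usage sketch with the `transformers` pipeline (the example sentence and the `aggregation_strategy` value are illustrative assumptions, not from the original card):
```python
from transformers import pipeline

# Group sub-word predictions into whole named entities.
ner = pipeline(
    "token-classification",
    model="ejin/bert-base-cased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```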
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1147 | 1.0 | 1756 | 0.0919 | 0.8940 | 0.9009 | 0.8974 | 0.9750 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Tianzhou/finbert-pretrain | Tianzhou | 2022-07-27T20:43:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain",
"pre-trained",
"finbert",
"unk",
"arxiv:2006.08097",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-19T08:56:58Z | ---
tags:
- autotrain
- pre-trained
- finbert
- fill-mask
language: unk
widget:
- text: Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise.
- text: Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron
- text: U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week.
---
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice.
### Pre-training
It was trained on the following three financial communication corpora, with a total size of 4.9B tokens.
- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens
The entire training is done using an **NVIDIA DGX-1** machine. The server has 4 Tesla P100 GPUs, providing a total of 128 GB of GPU memory. This machine enables us to train the BERT models using a batch size of 128. We utilize the Horovod framework for multi-GPU training. Overall, the total time taken to perform pretraining for one model is approximately **2 days**.
More details on `FinBERT`'s pre-training process can be found at: https://arxiv.org/abs/2006.08097
`FinBERT` can be further fine-tuned on downstream tasks. Specifically, we have fine-tuned `FinBERT` on an analyst sentiment classification task, and the fine-tuned model is shared at [https://huggingface.co/demo-org/auditor_review_model](https://huggingface.co/demo-org/auditor_review_model)
### Usage
Load the model directly from Transformers:
```python
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained("demo-org/finbert-pretrain", use_auth_token=True)
```
### Questions
Please contact the Data Science COE if you have more questions about this pre-trained model.
### Demo Model
This model card is for demo purposes. The original model card for this model is [https://huggingface.co/yiyanghkust/finbert-pretrain](https://huggingface.co/yiyanghkust/finbert-pretrain). |
cjdentra/distilbert-base-uncased-finetuned-emotion | cjdentra | 2022-07-27T20:38:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-27T20:18:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
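A minimal usage sketch with the `transformers` pipeline (the example input is illustrative; label names depend on how the emotion dataset was configured for this run):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cjdentra/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))
```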
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
unclearsoup/creative | unclearsoup | 2022-07-27T20:00:32Z | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
]
| null | 2022-07-27T19:58:27Z | ---
license: cc-by-4.0
---
```python
import requests

# API_TOKEN is assumed to be a Hugging Face access token defined elsewhere.
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
```
|
AriakimTaiyo/gpt2-chat | AriakimTaiyo | 2022-07-27T19:36:22Z | 61 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"gpt2",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-27T19:15:28Z | ---
language: en
license: mit
tags:
- conversational
---
# GPT-2 Large
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. It was pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"},
{'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"},
{'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"},
{'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"},
{'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = TFGPT2Model.from_pretrained('gpt2-large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2-large')
>>> set_seed(42)
>>> generator("The man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The man worked as a security guard in a hotel'},
{'generated_text': 'The man worked as a salesman in Mexico and in'},
{'generated_text': 'The man worked as a supervisor at the warehouse for'},
{'generated_text': "The man worked as a cleaner for the store's"},
{'generated_text': 'The man worked as a barbershop apprentice.'}]
>>> set_seed(42)
>>> generator("The woman worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The woman worked as a clerk at the bank.'},
{'generated_text': 'The woman worked as a caregiver, and her'},
{'generated_text': 'The woman worked as a customer service agent for a'},
{'generated_text': 'The woman worked as a cleaner at the store,'},
{'generated_text': 'The woman worked as a barista and was "'}]
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string `<UNK>` which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575|
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Unknown
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team. |
mariastull/Reinforce-2 | mariastull | 2022-07-27T19:17:16Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T19:16:19Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
kabelomalapane/En-Af_update | kabelomalapane | 2022-07-27T18:17:15Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-07-27T16:11:00Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: En-Af_update
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Af_update
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-af](https://huggingface.co/Helsinki-NLP/opus-mt-en-af) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8089
- Bleu: 45.1780
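A minimal English-to-Afrikaans usage sketch with the `transformers` translation pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

translator = pipeline("translation", model="kabelomalapane/En-Af_update")
print(translator("The weather is nice today.")[0]["translation_text"])
```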
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.4243 | 1.0 | 2553 | 1.8451 | 42.1314 |
| 1.0987 | 2.0 | 5106 | 1.7509 | 44.0714 |
| 0.9329 | 3.0 | 7659 | 1.7340 | 44.6003 |
| 0.8365 | 4.0 | 10212 | 1.7260 | 44.7820 |
| 0.7556 | 5.0 | 12765 | 1.7590 | 45.1180 |
| 0.6944 | 6.0 | 15318 | 1.7715 | 45.1451 |
| 0.652 | 7.0 | 17871 | 1.7696 | 45.1025 |
| 0.6132 | 8.0 | 20424 | 1.8060 | 45.1781 |
| 0.5832 | 9.0 | 22977 | 1.8135 | 45.2485 |
| 0.5602 | 10.0 | 25530 | 1.8089 | 45.1730 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
heriosousa/a2c-AntBulletEnv-v0 | heriosousa | 2022-07-27T17:03:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T17:02:08Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1020.71 +/- 201.31
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="heriosousa/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Evelyn18/roberta-base-spanish-squades-becasIncentivos4 | Evelyn18 | 2022-07-27T16:52:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-07-27T15:56:33Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos4
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7734
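A minimal usage sketch with the `transformers` question-answering pipeline (the Spanish question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/roberta-base-spanish-squades-becasIncentivos4")
result = qa(
    question="¿A quién está dirigida la beca?",
    context="La beca de incentivos está dirigida a estudiantes de pregrado con alto rendimiento académico.",
)
print(result["answer"], result["score"])
```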
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 11 | 1.8136 |
| No log | 2.0 | 22 | 1.7734 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mariastull/Reinforce-1 | mariastull | 2022-07-27T16:29:13Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T16:29:03Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- metrics:
- type: mean_reward
value: 11.90 +/- 1.81
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Go2Heart/BERT_Mod_1 | Go2Heart | 2022-07-27T16:17:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-27T16:07:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: BERT_Mod_1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541934635424655
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Mod_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1787
- Matthews Correlation: 0.5419
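A minimal usage sketch with raw `transformers` classes (mapping the two logits to CoLA's "unacceptable"/"acceptable" labels follows the usual GLUE convention and is an assumption):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Go2Heart/BERT_Mod_1")
model = AutoModelForSequenceClassification.from_pretrained("Go2Heart/BERT_Mod_1")

inputs = tokenizer("The book was read by the entire class.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the two acceptability classes
```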
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1616 | 1.0 | 535 | 0.9278 | 0.4979 |
| 0.1128 | 2.0 | 1070 | 1.0487 | 0.5046 |
| 0.0712 | 3.0 | 1605 | 1.0155 | 0.5306 |
| 0.0952 | 4.0 | 2140 | 1.1860 | 0.5147 |
| 0.0698 | 5.0 | 2675 | 1.1787 | 0.5419 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
annahaz/xlm-roberta-base-finetuned-misogyny-sexism | annahaz | 2022-07-27T14:45:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-05T19:00:29Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: xlm-roberta-base-finetuned-misogyny-sexism
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-misogyny-sexism
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9064
- Accuracy: 0.8334
- F1: 0.3322
- Precision: 0.2498
- Recall: 0.4961
- Mae: 0.1666
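A minimal usage sketch with the `transformers` pipeline (the example input is a neutral placeholder; label names are whatever the fine-tuning run assigned):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="annahaz/xlm-roberta-base-finetuned-misogyny-sexism")
print(classifier("This is an example sentence to score."))
```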
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.3869 | 1.0 | 2395 | 0.2905 | 0.8778 | 0.3528 | 0.3164 | 0.3988 | 0.1222 |
| 0.3539 | 2.0 | 4790 | 0.4143 | 0.8278 | 0.3465 | 0.2536 | 0.5467 | 0.1722 |
| 0.3124 | 3.0 | 7185 | 0.3327 | 0.8568 | 0.3583 | 0.2864 | 0.4786 | 0.1432 |
| 0.2817 | 4.0 | 9580 | 0.5621 | 0.7329 | 0.3092 | 0.1972 | 0.7160 | 0.2671 |
| 0.2651 | 5.0 | 11975 | 0.4376 | 0.8520 | 0.3607 | 0.2821 | 0.5 | 0.1480 |
| 0.2249 | 6.0 | 14370 | 0.5581 | 0.8326 | 0.3312 | 0.2485 | 0.4961 | 0.1674 |
| 0.1958 | 7.0 | 16765 | 0.6728 | 0.8382 | 0.3234 | 0.2484 | 0.4630 | 0.1618 |
| 0.1899 | 8.0 | 19160 | 0.7404 | 0.8304 | 0.3316 | 0.2471 | 0.5039 | 0.1696 |
| 0.1619 | 9.0 | 21555 | 0.8309 | 0.8461 | 0.3382 | 0.2639 | 0.4708 | 0.1539 |
| 0.1453 | 10.0 | 23950 | 0.9064 | 0.8334 | 0.3322 | 0.2498 | 0.4961 | 0.1666 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
butchland/rl-ppo-LunarLander-v2 | butchland | 2022-07-27T14:27:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-26T12:54:26Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.83 +/- 24.49
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with Stable-Baselines3.
checkpoint = load_from_hub(repo_id="butchland/rl-ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
suvadityamuk/q-Taxi-v3 | suvadityamuk | 2022-07-27T14:25:23Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T14:25:17Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.52 +/- 2.77
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="suvadityamuk/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
masterdezign/Reinforce-Pixelcopter-PLE-v0 | masterdezign | 2022-07-27T13:13:47Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T13:13:39Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- metrics:
- type: mean_reward
value: 11.30 +/- 10.25
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Kamrani/t5-large | Kamrani | 2022-07-27T13:13:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"en",
"fa",
"arxiv:1805.12471",
"arxiv:1708.00055",
"arxiv:1704.05426",
"arxiv:1606.05250",
"arxiv:1808.09121",
"arxiv:1810.12885",
"arxiv:1905.10044",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| translation | 2022-07-27T08:35:34Z | ---
language:
- en
- fa
tags:
- translation
license: apache-2.0
---
# Model Card for T5 Large

# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Citation](#citation)
8. [Model Card Authors](#model-card-authors)
9. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Large is the checkpoint with 770 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. See [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- **Model type:** Language model
- **Language(s) (NLP):** English, French, Romanian, German
- **License:** Apache 2.0
- **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
- **Resources for more information:**
- [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
- [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
# Uses
## Direct Use and Downstream Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
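As a minimal illustration of the text-to-text setup (the task prefix below is one of the standard ones from the T5 paper; the generated output may vary):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

# Tasks are selected with a natural-language prefix, here English-to-German translation.
input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```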
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
More information needed.
## Recommendations
More information needed.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
Thereby, the following datasets were being used for (1.) and (2.):
1. **Datasets used for Unsupervised denoising objective**:
- [C4](https://huggingface.co/datasets/c4)
- [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
2. **Datasets used for Supervised text-to-text language modeling objective**
- Sentence acceptability judgment
- CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
- Sentiment analysis
- SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- Paraphrasing/sentence similarity
- MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
- STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
- QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
- Natural language inference
- MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
- QNLI [Rajpurkar et al.,2016](https://arxiv.org/abs/1606.05250)
- RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
- CB [De Marneff et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
- Sentence completion
- COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
- Word sense disambiguation
- WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
- Question answering
- MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
- ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
- BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
## Training Procedure
In their [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
> In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
The framework introduced, the T5 framework, involves a training procedure that brings together the approaches studied in the paper. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
## Results
For full results for T5-Large, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Citation
**BibTeX:**
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5Model.from_pretrained("t5-large")
input_ids = tokenizer(
"Studies have been shown that owning a dog is good for you", return_tensors="pt"
).input_ids # Batch size 1
decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
|
sagteam/covid-twitter-xlm-roberta-large | sagteam | 2022-07-27T11:41:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | # COVID-twitter-XLM-Roberta-large
## Model description
This is a model based on the [XLM-RoBERTa large](https://huggingface.co/xlm-roberta-large) topology (provided by Facebook, see original [paper](https://arxiv.org/abs/1911.02116)) with additional training on a corpus of unmarked tweets.
For more details, please see, our [GitHub repository](https://github.com/sag111/COVID-19-tweets-Russia).
## Training data
We formed a corpus of unlabeled twitter messages.
The data collected with the keyword "covid" was expanded with texts containing other words that often occur in hashtags related to the Covid-19 pandemic: "covid", "stayhome", and "coronavirus" (hereinafter, these are translations of Russian words into English).
Separately, messages were collected from Twitter users from large regions of Russia. The search was performed using different word forms of 58 manually selected Russian keywords related to the topic of coronavirus infection (including "PCR", "pandemic", "self-isolation", etc.).
The unlabeled corpus includes all unique Russian-language tweets from the collected data (>1M tweets). Since modern language models are usually multilingual, about 1M more tweets in other languages were added to this corpus using filtering procedures described above. Thus, in the unlabeled part of the collected data, there were about 2 million messages.
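A minimal masked-language-modelling usage sketch (the example sentence is illustrative; XLM-R-style models use `<mask>` as the mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sagteam/covid-twitter-xlm-roberta-large")
print(fill_mask("Wearing a <mask> helps to slow the spread of the virus."))
```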
### BibTeX entry and citation info
Our GitHub repository: https://github.com/sag111/COVID-19-tweets-Russia
If you have found our results helpful in your work, feel free to cite our publication and this repository as:
```
@article{sboev2021russian,
title={The Russian language corpus and a neural network to analyse Internet tweet reports about Covid-19},
author={Sboev, Alexander and Moloshnikov, Ivan and Naumov, Alexander and Levochkina, Anastasia and Rybka, Roman},
year={2021}
}
```
|
ai4bharat/IndicBERTv2-alpha-POS-tagging | ai4bharat | 2022-07-27T11:23:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-22T13:46:31Z | # IndicXLMv2-alpha-POS-tagging
|
ai4bharat/IndicBERTv2-alpha-SentimentClassification | ai4bharat | 2022-07-27T11:22:06Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-22T13:29:28Z | # IndicXLMv2-alpha-SentimentClassification
|
robingeibel/reformer-big_patent-16384 | robingeibel | 2022-07-27T11:06:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"reformer",
"fill-mask",
"generated_from_trainer",
"dataset:big_patent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-26T05:39:57Z | ---
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: reformer-big_patent-16384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reformer-big_patent-16384
This model was trained from scratch on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.0379 | 1.0 | 17732 | 6.0935 |
| 5.9941 | 2.0 | 35464 | 6.0363 |
| 5.9831 | 3.0 | 53196 | 6.0565 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/jordo4today-paddedpossum-wrenfing | huggingtweets | 2022-07-27T10:16:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-27T10:15:48Z | ---
language: en
thumbnail: http://www.huggingtweets.com/jordo4today-paddedpossum-wrenfing/1658916978297/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1538409928943083526/gilLk6Ju_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1381760254799716353/bNTnf-3w_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1546006810754260992/Dk6vMJU3_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Mr. Wolf Simp & Zoinks & Jordo 🔜 MFF</div>
<div style="text-align: center; font-size: 14px;">@jordo4today-paddedpossum-wrenfing</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Mr. Wolf Simp & Zoinks & Jordo 🔜 MFF.
| Data | Mr. Wolf Simp | Zoinks | Jordo 🔜 MFF |
| --- | --- | --- | --- |
| Tweets downloaded | 3203 | 742 | 3244 |
| Retweets | 2858 | 90 | 636 |
| Short tweets | 135 | 37 | 243 |
| Tweets kept | 210 | 615 | 2365 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2e01we01/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jordo4today-paddedpossum-wrenfing's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wh0na3g) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wh0na3g/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jordo4today-paddedpossum-wrenfing')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DigitalUmuganda/joeynmt-en-kin | DigitalUmuganda | 2022-07-27T08:50:17Z | 0 | 0 | null | [
"doi:10.57967/hf/0054",
"region:us"
]
| null | 2022-07-25T10:34:05Z | # English-to-Kinyarwanda Machine Translation
This model is an English-to-Kinyarwanda machine translation model; it was built and trained using the JoeyNMT framework. The translation model uses a transformer encoder-decoder architecture. It was trained on an English-Kinyarwanda bitext dataset of 47,211 sentence pairs prepared by Digital Umuganda.
## Model architecture
**Encoder && Decoder**
> Type: Transformer
Num_layer: 6
Num_heads: 8
Embedding_dim: 256
ff_size: 1024
Dropout: 0.1
Layer_norm: post
Initializer: xavier
Total params: 12563968
## Pre-processing
Tokenizer_type: subword-nmt
num_merges: 4000
BPE encoding learned on the bitext, separate vocabularies for each language
Pretokenizer: None
No lowercase applied
## Training
Optimizer: Adam
Loss: crossentropy
Epochs: 30
Batch_size: 256
Number of GPUs: 1
## Evaluation
Evaluation_metrics: Bleu_score, chrf
Tokenization: None
Beam_width: 15
Beam_alpha: 1.0
## Tools
* joeyNMT 2.0.0
* datasets
* pandas
* numpy
* transformers
* sentencepiece
* pytorch(with cuda)
* sacrebleu
* protobuf>=3.20.1
## How to train
[Use the following link for more information](https://github.com/joeynmt/joeynmt)
## Translation
To install joeyNMT run:
>$ git clone https://github.com/joeynmt/joeynmt.git
$ cd joeynmt
$ pip install -e .
Interactive translation (stdin):
>$ python -m joeynmt translate args.yaml
File translation:
>$ python -m joeynmt translate args.yaml < src_lang.txt > hypothesis_trg_lang.txt
## Accuracy measurement
Sacrebleu installation:
> $ pip install sacrebleu
Measurement (bleu_score, chrf):
> $ sacrebleu reference.tsv -i hypothesis.tsv -m bleu chrf
## To-do
>* Test the model using different datasets, including JW300
>* Use the Digital Umuganda dataset with some of the available state-of-the-art (SOTA) models.
>* Expand the dataset
## Result
The following results were obtained using sacrebleu.
English-to-Kinyarwanda:
>Bleu: 56.5
Chrf: 75.2
|
Billwzl/20split_dataset_version2 | Billwzl | 2022-07-27T08:07:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-25T06:01:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset_version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7621 | 1.0 | 11851 | 2.5216 |
| 2.5466 | 2.0 | 23702 | 2.4157 |
| 2.4505 | 3.0 | 35553 | 2.3592 |
| 2.3798 | 4.0 | 47404 | 2.3028 |
| 2.3178 | 5.0 | 59255 | 2.2768 |
| 2.272 | 6.0 | 71106 | 2.2366 |
| 2.2323 | 7.0 | 82957 | 2.2128 |
| 2.1928 | 8.0 | 94808 | 2.1797 |
| 2.157 | 9.0 | 106659 | 2.1667 |
| 2.1292 | 10.0 | 118510 | 2.1392 |
| 2.0978 | 11.0 | 130361 | 2.1280 |
| 2.0725 | 12.0 | 142212 | 2.1106 |
| 2.052 | 13.0 | 154063 | 2.0944 |
| 2.0268 | 14.0 | 165914 | 2.0804 |
| 2.0121 | 15.0 | 177765 | 2.0698 |
| 1.9997 | 16.0 | 189616 | 2.0626 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
wooihen/xlm-roberta-base-finetuned-panx-de-fr | wooihen | 2022-07-27T07:25:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-27T06:57:57Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
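The card does not include a usage example; the sketch below is one illustrative way to run the model with the 🤗 token-classification pipeline (the repo id is taken from this card, the example sentence is arbitrary):
```python
from transformers import pipeline

# Illustrative sketch only; not part of the original card.
ner = pipeline(
    "token-classification",
    model="wooihen/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```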
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/stephenking | huggingtweets | 2022-07-27T06:45:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/stephenking/1658904308336/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/378800000836981162/b683f7509ec792c3e481ead332940cdc_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Stephen King</div>
<div style="text-align: center; font-size: 14px;">@stephenking</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Stephen King.
| Data | Stephen King |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 770 |
| Short tweets | 205 |
| Tweets kept | 2255 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3c83ql6r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stephenking's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/llolipvn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/llolipvn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stephenking')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
wenkai-li/distilroberta-base-finetuned-marktextepoch_35 | wenkai-li | 2022-07-27T06:17:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-27T02:56:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-marktextepoch_35
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-marktextepoch_35
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5158 | 1.0 | 1500 | 2.3385 |
| 2.4312 | 2.0 | 3000 | 2.2620 |
| 2.3563 | 3.0 | 4500 | 2.2279 |
| 2.3249 | 4.0 | 6000 | 2.2165 |
| 2.2515 | 5.0 | 7500 | 2.2246 |
| 2.2178 | 6.0 | 9000 | 2.1714 |
| 2.1822 | 7.0 | 10500 | 2.1461 |
| 2.1501 | 8.0 | 12000 | 2.1388 |
| 2.1342 | 9.0 | 13500 | 2.1085 |
| 2.1141 | 10.0 | 15000 | 2.1090 |
| 2.0833 | 11.0 | 16500 | 2.1130 |
| 2.0769 | 12.0 | 18000 | 2.0969 |
| 2.0474 | 13.0 | 19500 | 2.0823 |
| 2.0364 | 14.0 | 21000 | 2.0893 |
| 2.0269 | 15.0 | 22500 | 2.0501 |
| 1.9814 | 16.0 | 24000 | 2.0667 |
| 1.9716 | 17.0 | 25500 | 2.0570 |
| 1.9611 | 18.0 | 27000 | 2.0530 |
| 1.9557 | 19.0 | 28500 | 2.0590 |
| 1.9443 | 20.0 | 30000 | 2.0381 |
| 1.9229 | 21.0 | 31500 | 2.0433 |
| 1.9192 | 22.0 | 33000 | 2.0468 |
| 1.8865 | 23.0 | 34500 | 2.0361 |
| 1.914 | 24.0 | 36000 | 2.0412 |
| 1.867 | 25.0 | 37500 | 2.0165 |
| 1.8724 | 26.0 | 39000 | 2.0152 |
| 1.8644 | 27.0 | 40500 | 2.0129 |
| 1.8685 | 28.0 | 42000 | 2.0183 |
| 1.8458 | 29.0 | 43500 | 2.0082 |
| 1.8653 | 30.0 | 45000 | 1.9939 |
| 1.8584 | 31.0 | 46500 | 2.0015 |
| 1.8396 | 32.0 | 48000 | 1.9924 |
| 1.8399 | 33.0 | 49500 | 2.0102 |
| 1.8363 | 34.0 | 51000 | 1.9946 |
| 1.83 | 35.0 | 52500 | 1.9908 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sheikh/layoutlmv2-finetuned-SLR-test | sheikh | 2022-07-27T06:09:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-27T05:47:00Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-SLR-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-SLR-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 8
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mlegls/codeparrot-ds | mlegls | 2022-07-27T06:02:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-27T00:42:34Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
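The training script itself is not part of this card; as a rough sketch, the settings above correspond to a 🤗 `TrainingArguments` configuration along these lines (`output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Sketch reproducing the listed hyperparameters; not the author's actual script.
args = TrainingArguments(
    output_dir="codeparrot-ds",          # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,       # 8 x 8 = total train batch size 64
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                           # "Native AMP" mixed precision
)
```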
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.0659 | 0.34 | 5000 | 3.9176 |
| 1.8404 | 0.67 | 10000 | 3.7958 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ibraheemmoosa/xlmindic-base-uniscript | ibraheemmoosa | 2022-07-27T05:37:04Z | 20 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"albert",
"pretraining",
"multilingual",
"masked-language-modeling",
"sentence-order-prediction",
"fill-mask",
"xlmindic",
"nlp",
"indoaryan",
"indicnlp",
"iso15919",
"transliteration",
"as",
"bn",
"gu",
"hi",
"mr",
"ne",
"or",
"pa",
"si",
"sa",
"bpy",
"mai",
"bh",
"gom",
"dataset:oscar",
"license:apache-2.0",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language:
- as
- bn
- gu
- hi
- mr
- ne
- or
- pa
- si
- sa
- bpy
- mai
- bh
- gom
license: apache-2.0
datasets:
- oscar
tags:
- multilingual
- albert
- masked-language-modeling
- sentence-order-prediction
- fill-mask
- xlmindic
- nlp
- indoaryan
- indicnlp
- iso15919
- transliteration
widget:
- text : 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'
co2_eq_emissions:
emissions: 28.53
source: "calculated using this webstie https://mlco2.github.io/impact/#compute"
training_type: "pretraining"
geographical_location: "NA"
hardware_used: "TPUv3-8 for about 180 hours or 7.5 days"
---
# XLMIndic Base Uniscript
This model is pretrained on a subset of the [OSCAR](https://huggingface.co/datasets/oscar) corpus spanning 14 Indo-Aryan languages. **Before pretraining this model we transliterate the text to [ISO-15919](https://en.wikipedia.org/wiki/ISO_15919) format using the [Aksharamukha](https://pypi.org/project/aksharamukha/)
library.** A demo of Aksharamukha library is hosted [here](https://aksharamukha.appspot.com/converter)
where you can transliterate your text and use it on our model on the inference widget.
## Model description
This model has the same configuration as the [ALBERT Base v2 model](https://huggingface.co/albert-base-v2/). Specifically, this model has the following configuration:
- 12 repeating layers
- 128 embedding dimension
- 768 hidden dimension
- 12 attention heads
- 11M parameters
- 512 sequence length
## Training data
This model was pretrained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset which is a medium sized multilingual corpus containing text from 163 languages. We select a subset of 14 languages based on the following criteria:
- Belongs to the [Indo-Aryan language family](https://en.wikipedia.org/wiki/Indo-Aryan_languages).
- Uses a [Brahmic script](https://en.wikipedia.org/wiki/Brahmic_scripts).
These are the 14 languages we pretrain this model on:
- Assamese
- Bangla
- Bihari
- Bishnupriya Manipuri
- Goan Konkani
- Gujarati
- Hindi
- Maithili
- Marathi
- Nepali
- Oriya
- Panjabi
- Sanskrit
- Sinhala
## Transliteration
*The unique component of this model is that it takes in ISO-15919 transliterated text.*
The motivation behind this is as follows. When two languages share vocabulary, a machine learning model can exploit that to learn good cross-lingual representations. However, if these two languages use different writing scripts, it is difficult for the model to make the connection. Thus, if we can write the two languages in a single script, it is easier for the model to learn good cross-lingual representations.
For many of the scripts currently in use, there are standard transliteration schemes to convert to the Latin script. In particular, for the Indic scripts the ISO-15919 transliteration scheme is designed to consistently transliterate texts written in different Indic scripts to the Latin script.
An example of ISO-15919 transliteration for a piece of **Bangla** text is the following:
**Original:** "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি কবি, ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক।"
**Transliterated:** 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika.'
Another example for a piece of **Hindi** text is the following:
**Original:** "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
**Transliterated:** "cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
## Training procedure
### Preprocessing
The texts are transliterated to ISO-15919 format using the Aksharamukha library. Then these are tokenized using SentencePiece and a vocabulary size of 50,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
### Training
The training objective is the same as in the original ALBERT.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
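A minimal sketch of this 80/10/10 masking scheme (illustrative only; the actual pretraining uses the standard ALBERT/BERT masking collator):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Illustrative 80/10/10 masking; not the exact pretraining code."""
    labels = [-100] * len(token_ids)              # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                       # predict the original token here
            r = random.random()
            if r < 0.8:
                token_ids[i] = mask_id            # 80%: replace with [MASK]
            elif r < 0.9:
                token_ids[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: leave the token unchanged
    return token_ids, labels
```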
The details of the sentence order prediction example generation procedure for each sentence are the following:
- Split the sentence into two parts A and B at a random index.
- With 50% probability swap the two parts.
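A correspondingly small sketch of the sentence-order-prediction example generation described above (the label convention is illustrative):
```python
import random

def make_sop_example(tokens):
    """Split at a random index and swap the two halves with 50% probability (sketch)."""
    split = random.randint(1, len(tokens) - 1)    # assumes len(tokens) >= 2
    seg_a, seg_b = tokens[:split], tokens[split:]
    if random.random() < 0.5:
        seg_a, seg_b = seg_b, seg_a
        label = 1                                 # swapped order
    else:
        label = 0                                 # original order
    return seg_a, seg_b, label
```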
The model was pretrained on TPUv3-8 for 1M steps. We have checkpoints available at every 100k pretraining steps. These are available at different branches of this repository. You can load these checkpoints by passing the `revision` parameter. For example to load the checkpoint at 500k you can use the following code.
```python
>>> AutoModel.from_pretrained('ibraheemmoosa/xlmindic-base-uniscript', revision='checkpoint_500k')
```
## Evaluation results
We evaluated this model on the Indo-Aryan subset of languages (Panjabi, Oriya, Assamese, Bangla, Hindi, Marathi, Gujarati) from the [IndicGLUE](https://huggingface.co/datasets/indic_glue) benchmark dataset. We report the mean and standard deviation of nine fine-tuning runs for this model. We compare with an [ablation model](https://huggingface.co/ibraheemmoosa/xlmindic-base-multiscript) that does not use transliteration and is instead trained on the original scripts.
### IndicGLUE
Task | mBERT | XLM-R | IndicBERT-Base | XLMIndic-Base-Uniscript (This Model) | XLMIndic-Base-Multiscript (Ablation Model)
-----| ----- | ----- | ------ | ------- | --------
Wikipedia Section Title Prediction | 71.90 | 65.45 | 69.40 | **81.78 ± 0.60** | 77.17 ± 0.76
Article Genre Classification | 88.64 | 96.61 | 97.72 | **98.70 ± 0.29** | 98.30 ± 0.26
Named Entity Recognition (F1-score) | 71.29 | 62.18 | 56.69 | **89.85 ± 1.14** | 83.19 ± 1.58
BBC Hindi News Article Classification | 60.55 | 75.52 | 74.60 | **79.14 ± 0.60** | 77.28 ± 1.50
Soham Bangla News Article Classification | 80.23 | 87.6 | 78.45 | **93.89 ± 0.48** | 93.22 ± 0.49
INLTK Gujarati Headlines Genre Classification | - | - | **92.91** | 90.73 ± 0.75 | 90.41 ± 0.69
INLTK Marathi Headlines Genre Classification | - | - | **94.30** | 92.04 ± 0.47 | 92.21 ± 0.23
IITP Hindi Product Reviews Sentiment Classification | 74.57 | **78.97** | 71.32 | 77.18 ± 0.77 | 76.33 ± 0.84
IITP Hindi Movie Reviews Sentiment Classification | 56.77 | 61.61 | 59.03 | **66.34 ± 0.16** | 65.91 ± 2.20
MIDAS Hindi Discourse Type Classification | 71.20 | **79.94** | 78.44 | 78.54 ± 0.91 | 78.39 ± 0.33
Cloze Style Question Answering (Fill-mask task) | - | - | 37.16 | **41.54** | 38.21
## Intended uses & limitations
This model is pretrained on Indo-Aryan languages. Thus it is intended to be used for downstream tasks on these languages. However, since Dravidian languages such as Malayalam, Telugu, Kannada, etc. share a lot of vocabulary with the Indo-Aryan languages, this model can potentially be used on those languages too (after transliterating the text to ISO-15919).
You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=xlmindic) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT-2.
### How to use
To use this model you will need to first install the [Aksharamukha](https://pypi.org/project/aksharamukha/) library.
```bash
pip install aksharamukha
```
Using this library you can transliterate any text written in Indic scripts in the following way:
```python
>>> from aksharamukha import transliterate
>>> text = "चूंकि मानव परिवार के सभी सदस्यों के जन्मजात गौरव और समान तथा अविच्छिन्न अधिकार की स्वीकृति ही विश्व-शान्ति, न्याय और स्वतन्त्रता की बुनियाद है"
>>> transliterated_text = transliterate.process('autodetect', 'ISO', text)
>>> transliterated_text
"cūṁki mānava parivāra kē sabhī sadasyōṁ kē janmajāta gaurava aura samāna tathā avicchinna adhikāra kī svīkr̥ti hī viśva-śānti, nyāya aura svatantratā kī buniyāda hai"
```
Then you can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> from aksharamukha import transliterate
>>> unmasker = pipeline('fill-mask', model='ibraheemmoosa/xlmindic-base-uniscript')
>>> text = "রবীন্দ্রনাথ ঠাকুর এফআরএএস (৭ মে ১৮৬১ - ৭ আগস্ট ১৯৪১; ২৫ বৈশাখ ১২৬৮ - ২২ শ্রাবণ ১৩৪৮ বঙ্গাব্দ) ছিলেন অগ্রণী বাঙালি [MASK], ঔপন্যাসিক, সংগীতস্রষ্টা, নাট্যকার, চিত্রকর, ছোটগল্পকার, প্রাবন্ধিক, অভিনেতা, কণ্ঠশিল্পী ও দার্শনিক। ১৯১৩ সালে গীতাঞ্জলি কাব্যগ্রন্থের ইংরেজি অনুবাদের জন্য তিনি এশীয়দের মধ্যে সাহিত্যে প্রথম নোবেল পুরস্কার লাভ করেন।"
>>> transliterated_text = transliterate.process('Bengali', 'ISO', text)
>>> transliterated_text
'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli [MASK], aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama [MASK] puraskāra lābha karēna.'
>>> unmasker(transliterated_text)
[{'score': 0.39705055952072144,
'token': 1500,
'token_str': 'abhinētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli abhinētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.20499080419540405,
'token': 3585,
'token_str': 'kabi',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kabi, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.1314290314912796,
'token': 15402,
'token_str': 'rājanētā',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli rājanētā, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.060830358415842056,
'token': 3212,
'token_str': 'kalākāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli kalākāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'},
{'score': 0.035522934049367905,
'token': 11586,
'token_str': 'sāhityakāra',
'sequence': 'rabīndranātha ṭhākura ēphaāraēēsa (7 mē 1861 - 7 āgasṭa 1941; 25 baiśākha 1268 - 22 śrābaṇa 1348 baṅgābda) chilēna agraṇī bāṅāli sāhityakāra, aupanyāsika, saṁgītasraṣṭā, nāṭyakāra, citrakara, chōṭagalpakāra, prābandhika, abhinētā, kaṇṭhaśilpī ō dārśanika. 1913 sālē gītāñjali kābyagranthēra iṁrēji anubādēra janya tini ēśīẏadēra madhyē sāhityē prathama nōbēla puraskāra lābha karēna.'}]
```
### Limitations and bias
Even though we pretrain on a comparatively large multilingual corpus the model may exhibit harmful gender, ethnic and political bias. If you fine-tune this model on a task where these issues are important you should take special care when relying on the model to make decisions.
## Contact
Feel free to contact us if you have any ideas or if you want to know more about our models.
- Ibraheem Muhammad Moosa ([email protected])
- Mahmud Elahi Akhter ([email protected])
- Ashfia Binte Habib
## BibTeX entry and citation info
```bibtex
@article{Moosa2022DoesTH,
title={Does Transliteration Help Multilingual Language Modeling?},
author={Ibraheem Muhammad Moosa and Mahmuda Akhter and Ashfia Binte Habib},
journal={ArXiv},
year={2022},
volume={abs/2201.12501}
}
``` |
RajSang/q-Taxi-v3 | RajSang | 2022-07-27T03:38:14Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-27T03:38:05Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.48 +/- 2.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="RajSang/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
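Note that `load_from_hub` and `evaluate_agent` are helper functions defined in the Deep RL class notebooks rather than library imports. A minimal sketch of the download step, assuming the Q-table is stored as a pickle file in this repo, could look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved Q-learning model dictionary (sketch)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```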
|
gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53 | gary109 | 2022-07-27T03:22:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-22T00:21:50Z | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_singing3_ft_pretrain2_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4279
- Wer: 1.0087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.209 | 1.0 | 72 | 2.5599 | 0.9889 |
| 1.3395 | 2.0 | 144 | 2.7188 | 0.9877 |
| 1.2695 | 3.0 | 216 | 2.9989 | 0.9709 |
| 1.2818 | 4.0 | 288 | 3.2352 | 0.9757 |
| 1.2389 | 5.0 | 360 | 3.6867 | 0.9783 |
| 1.2368 | 6.0 | 432 | 3.3189 | 0.9811 |
| 1.2307 | 7.0 | 504 | 3.0786 | 0.9657 |
| 1.2607 | 8.0 | 576 | 2.9720 | 0.9677 |
| 1.2584 | 9.0 | 648 | 2.5613 | 0.9702 |
| 1.2266 | 10.0 | 720 | 2.6937 | 0.9610 |
| 1.262 | 11.0 | 792 | 3.9060 | 0.9745 |
| 1.2361 | 12.0 | 864 | 3.6138 | 0.9718 |
| 1.2348 | 13.0 | 936 | 3.4838 | 0.9745 |
| 1.2715 | 14.0 | 1008 | 3.3128 | 0.9751 |
| 1.2505 | 15.0 | 1080 | 3.2015 | 0.9710 |
| 1.211 | 16.0 | 1152 | 3.4709 | 0.9709 |
| 1.2067 | 17.0 | 1224 | 3.0566 | 0.9673 |
| 1.2536 | 18.0 | 1296 | 2.5479 | 0.9789 |
| 1.2297 | 19.0 | 1368 | 2.8307 | 0.9710 |
| 1.1949 | 20.0 | 1440 | 3.4112 | 0.9777 |
| 1.2181 | 21.0 | 1512 | 2.6784 | 0.9682 |
| 1.195 | 22.0 | 1584 | 3.0395 | 0.9639 |
| 1.2047 | 23.0 | 1656 | 3.1935 | 0.9726 |
| 1.2306 | 24.0 | 1728 | 3.2649 | 0.9723 |
| 1.199 | 25.0 | 1800 | 3.1378 | 0.9645 |
| 1.1945 | 26.0 | 1872 | 2.8143 | 0.9596 |
| 1.19 | 27.0 | 1944 | 3.5174 | 0.9787 |
| 1.1976 | 28.0 | 2016 | 2.9666 | 0.9594 |
| 1.2229 | 29.0 | 2088 | 2.8672 | 0.9589 |
| 1.1548 | 30.0 | 2160 | 2.6568 | 0.9627 |
| 1.169 | 31.0 | 2232 | 2.8799 | 0.9654 |
| 1.1857 | 32.0 | 2304 | 2.8691 | 0.9625 |
| 1.1862 | 33.0 | 2376 | 2.8251 | 0.9555 |
| 1.1721 | 34.0 | 2448 | 3.5968 | 0.9726 |
| 1.1293 | 35.0 | 2520 | 3.4130 | 0.9651 |
| 1.1513 | 36.0 | 2592 | 2.8804 | 0.9630 |
| 1.1537 | 37.0 | 2664 | 2.5824 | 0.9575 |
| 1.1818 | 38.0 | 2736 | 2.8443 | 0.9613 |
| 1.1835 | 39.0 | 2808 | 2.6431 | 0.9619 |
| 1.1457 | 40.0 | 2880 | 2.9254 | 0.9639 |
| 1.1591 | 41.0 | 2952 | 2.8194 | 0.9561 |
| 1.1284 | 42.0 | 3024 | 2.6432 | 0.9806 |
| 1.1602 | 43.0 | 3096 | 2.4279 | 1.0087 |
| 1.1556 | 44.0 | 3168 | 2.5040 | 1.0030 |
| 1.1256 | 45.0 | 3240 | 3.1641 | 0.9608 |
| 1.1256 | 46.0 | 3312 | 2.9522 | 0.9677 |
| 1.1211 | 47.0 | 3384 | 2.6318 | 0.9580 |
| 1.1142 | 48.0 | 3456 | 2.7298 | 0.9533 |
| 1.1237 | 49.0 | 3528 | 2.5442 | 0.9673 |
| 1.0976 | 50.0 | 3600 | 2.7767 | 0.9610 |
| 1.1154 | 51.0 | 3672 | 2.6849 | 0.9646 |
| 1.1012 | 52.0 | 3744 | 2.5384 | 0.9621 |
| 1.1077 | 53.0 | 3816 | 2.4505 | 1.0067 |
| 1.0936 | 54.0 | 3888 | 2.5847 | 0.9687 |
| 1.0772 | 55.0 | 3960 | 2.4575 | 0.9761 |
| 1.092 | 56.0 | 4032 | 2.4889 | 0.9802 |
| 1.0868 | 57.0 | 4104 | 2.5885 | 0.9664 |
| 1.0979 | 58.0 | 4176 | 2.6370 | 0.9607 |
| 1.094 | 59.0 | 4248 | 2.6195 | 0.9605 |
| 1.0745 | 60.0 | 4320 | 2.5346 | 0.9834 |
| 1.1057 | 61.0 | 4392 | 2.6879 | 0.9603 |
| 1.0722 | 62.0 | 4464 | 2.5426 | 0.9735 |
| 1.0731 | 63.0 | 4536 | 2.8259 | 0.9535 |
| 1.0862 | 64.0 | 4608 | 2.7632 | 0.9559 |
| 1.0396 | 65.0 | 4680 | 2.5401 | 0.9807 |
| 1.0581 | 66.0 | 4752 | 2.6977 | 0.9687 |
| 1.0647 | 67.0 | 4824 | 2.6968 | 0.9694 |
| 1.0549 | 68.0 | 4896 | 2.6439 | 0.9807 |
| 1.0607 | 69.0 | 4968 | 2.6822 | 0.9771 |
| 1.05 | 70.0 | 5040 | 2.7011 | 0.9607 |
| 1.042 | 71.0 | 5112 | 2.5766 | 0.9713 |
| 1.042 | 72.0 | 5184 | 2.5720 | 0.9747 |
| 1.0594 | 73.0 | 5256 | 2.7176 | 0.9704 |
| 1.0425 | 74.0 | 5328 | 2.7458 | 0.9614 |
| 1.0199 | 75.0 | 5400 | 2.5906 | 0.9987 |
| 1.0198 | 76.0 | 5472 | 2.5534 | 1.0087 |
| 1.0193 | 77.0 | 5544 | 2.5421 | 0.9933 |
| 1.0379 | 78.0 | 5616 | 2.5139 | 0.9994 |
| 1.025 | 79.0 | 5688 | 2.4850 | 1.0313 |
| 1.0054 | 80.0 | 5760 | 2.5803 | 0.9814 |
| 1.0218 | 81.0 | 5832 | 2.5696 | 0.9867 |
| 1.0177 | 82.0 | 5904 | 2.6011 | 1.0065 |
| 1.0094 | 83.0 | 5976 | 2.6166 | 0.9855 |
| 1.0202 | 84.0 | 6048 | 2.5557 | 1.0204 |
| 1.0148 | 85.0 | 6120 | 2.6118 | 1.0033 |
| 1.0117 | 86.0 | 6192 | 2.5671 | 1.0120 |
| 1.0195 | 87.0 | 6264 | 2.5443 | 1.0041 |
| 1.0114 | 88.0 | 6336 | 2.5627 | 1.0049 |
| 1.0074 | 89.0 | 6408 | 2.5670 | 1.0255 |
| 0.9883 | 90.0 | 6480 | 2.5338 | 1.0306 |
| 1.0112 | 91.0 | 6552 | 2.5615 | 1.0142 |
| 0.9986 | 92.0 | 6624 | 2.5566 | 1.0415 |
| 0.9939 | 93.0 | 6696 | 2.5728 | 1.0287 |
| 0.9954 | 94.0 | 6768 | 2.5617 | 1.0138 |
| 0.9643 | 95.0 | 6840 | 2.5890 | 1.0145 |
| 0.9892 | 96.0 | 6912 | 2.5918 | 1.0119 |
| 0.983 | 97.0 | 6984 | 2.5862 | 1.0175 |
| 0.988 | 98.0 | 7056 | 2.5873 | 1.0147 |
| 0.9908 | 99.0 | 7128 | 2.5973 | 1.0073 |
| 0.9696 | 100.0 | 7200 | 2.5938 | 1.0156 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
Evelyn18/roberta-base-spanish-squades-becasIncentivos1 | Evelyn18 | 2022-07-27T03:13:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-07-27T03:00:53Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: roberta-base-spanish-squades-becasIncentivos1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-spanish-squades-becasIncentivos1
This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 11
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 2.1580 |
| No log | 2.0 | 12 | 1.7889 |
| No log | 3.0 | 18 | 1.8939 |
| No log | 4.0 | 24 | 2.1401 |
| No log | 5.0 | 30 | 2.1943 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest | AykeeSalazar | 2022-07-27T02:12:37Z | 52 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-07-26T07:53:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vc-bantai-vit-withoutAMBI-adunest
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: Violation-Classification---Raw-6
metrics:
- name: Accuracy
type: accuracy
value: 0.9388646288209607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1950
- Accuracy: 0.9389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4821 | 0.11 | 100 | 0.7644 | 0.6714 |
| 0.7032 | 0.23 | 200 | 0.5568 | 0.75 |
| 0.5262 | 0.34 | 300 | 0.4440 | 0.7806 |
| 0.4719 | 0.45 | 400 | 0.3893 | 0.8144 |
| 0.5021 | 0.57 | 500 | 0.5129 | 0.8090 |
| 0.3123 | 0.68 | 600 | 0.4536 | 0.7980 |
| 0.3606 | 0.79 | 700 | 0.3679 | 0.8483 |
| 0.4081 | 0.91 | 800 | 0.3335 | 0.8559 |
| 0.3624 | 1.02 | 900 | 0.3149 | 0.8592 |
| 0.1903 | 1.14 | 1000 | 0.3296 | 0.8766 |
| 0.334 | 1.25 | 1100 | 0.2832 | 0.8897 |
| 0.2731 | 1.36 | 1200 | 0.2546 | 0.8930 |
| 0.311 | 1.48 | 1300 | 0.2585 | 0.8908 |
| 0.3209 | 1.59 | 1400 | 0.2701 | 0.8854 |
| 0.4005 | 1.7 | 1500 | 0.2643 | 0.8897 |
| 0.3128 | 1.82 | 1600 | 0.2864 | 0.8843 |
| 0.3376 | 1.93 | 1700 | 0.2882 | 0.8657 |
| 0.2698 | 2.04 | 1800 | 0.2876 | 0.9028 |
| 0.2347 | 2.16 | 1900 | 0.2405 | 0.8974 |
| 0.2436 | 2.27 | 2000 | 0.2804 | 0.8886 |
| 0.1764 | 2.38 | 2100 | 0.2852 | 0.8952 |
| 0.1197 | 2.5 | 2200 | 0.2312 | 0.9127 |
| 0.1082 | 2.61 | 2300 | 0.2133 | 0.9116 |
| 0.1245 | 2.72 | 2400 | 0.2677 | 0.8985 |
| 0.1335 | 2.84 | 2500 | 0.2098 | 0.9181 |
| 0.2194 | 2.95 | 2600 | 0.1911 | 0.9127 |
| 0.089 | 3.06 | 2700 | 0.2062 | 0.9181 |
| 0.0465 | 3.18 | 2800 | 0.2414 | 0.9247 |
| 0.0985 | 3.29 | 2900 | 0.1869 | 0.9389 |
| 0.1113 | 3.41 | 3000 | 0.1819 | 0.9323 |
| 0.1392 | 3.52 | 3100 | 0.2101 | 0.9312 |
| 0.0621 | 3.63 | 3200 | 0.2201 | 0.9367 |
| 0.1168 | 3.75 | 3300 | 0.1935 | 0.9389 |
| 0.059 | 3.86 | 3400 | 0.1946 | 0.9367 |
| 0.0513 | 3.97 | 3500 | 0.1950 | 0.9389 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bigmorning/distilgpt_new5_0040 | bigmorning | 2022-07-27T01:38:57Z | 3 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-27T01:33:32Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt_new5_0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt_new5_0040
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4633
- Validation Loss: 2.3432
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4839 | 2.3639 | 0 |
| 2.4833 | 2.3630 | 1 |
| 2.4827 | 2.3620 | 2 |
| 2.4821 | 2.3632 | 3 |
| 2.4816 | 2.3617 | 4 |
| 2.4811 | 2.3614 | 5 |
| 2.4805 | 2.3613 | 6 |
| 2.4799 | 2.3613 | 7 |
| 2.4794 | 2.3600 | 8 |
| 2.4788 | 2.3589 | 9 |
| 2.4784 | 2.3582 | 10 |
| 2.4779 | 2.3563 | 11 |
| 2.4774 | 2.3579 | 12 |
| 2.4768 | 2.3563 | 13 |
| 2.4762 | 2.3561 | 14 |
| 2.4756 | 2.3554 | 15 |
| 2.4751 | 2.3539 | 16 |
| 2.4746 | 2.3550 | 17 |
| 2.4741 | 2.3534 | 18 |
| 2.4736 | 2.3530 | 19 |
| 2.4731 | 2.3522 | 20 |
| 2.4725 | 2.3522 | 21 |
| 2.4719 | 2.3525 | 22 |
| 2.4714 | 2.3519 | 23 |
| 2.4709 | 2.3505 | 24 |
| 2.4705 | 2.3489 | 25 |
| 2.4699 | 2.3488 | 26 |
| 2.4694 | 2.3498 | 27 |
| 2.4689 | 2.3472 | 28 |
| 2.4683 | 2.3476 | 29 |
| 2.4679 | 2.3477 | 30 |
| 2.4675 | 2.3468 | 31 |
| 2.4668 | 2.3454 | 32 |
| 2.4665 | 2.3455 | 33 |
| 2.4659 | 2.3456 | 34 |
| 2.4655 | 2.3436 | 35 |
| 2.4649 | 2.3433 | 36 |
| 2.4644 | 2.3437 | 37 |
| 2.4638 | 2.3428 | 38 |
| 2.4633 | 2.3432 | 39 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
vish88/xlnet-base-mnli-fer-finetuned | vish88 | 2022-07-27T00:59:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-25T16:38:02Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlnet-base-mnli-fer-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-mnli-fer-finetuned
This model is a fine-tuned version of [clevrly/xlnet-base-mnli-finetuned](https://huggingface.co/clevrly/xlnet-base-mnli-finetuned) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0152
- Accuracy: 0.7794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5828 | 1.0 | 2219 | 0.9689 | 0.7277 |
| 0.578 | 2.0 | 4438 | 1.1408 | 0.7310 |
| 0.5027 | 3.0 | 6657 | 0.9754 | 0.7742 |
| 0.4233 | 4.0 | 8876 | 1.0719 | 0.7751 |
| 0.3026 | 5.0 | 11095 | 1.0152 | 0.7794 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/MiniLM-prop-16-train-set | ultra-coder54732 | 2022-07-27T00:45:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-27T00:40:56Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: MiniLM-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLM-prop-16-train-set
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/distilbert-prop-16-train-set | ultra-coder54732 | 2022-07-27T00:33:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T03:05:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-prop-16-train-set
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ultra-coder54732/robertabaseproper-prop-16-train-set | ultra-coder54732 | 2022-07-27T00:19:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-27T00:02:27Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: robertabaseproper-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertabaseproper-prop-16-train-set
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
planhanasan/test-trainer | planhanasan | 2022-07-27T00:09:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"generated_from_trainer",
"ja",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-23T06:45:29Z | ---
license: apache-2.0
language:
- ja
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-finetuned-synthetic-translated-only | domenicrosati | 2022-07-26T22:34:44Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T13:48:09Z | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- f1
- precision
- recall
model-index:
- name: deberta-v3-large-finetuned-synthetic-translated-only
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-finetuned-synthetic-translated-only
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- F1: 0.9961
- Precision: 1.0
- Recall: 0.9922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:---------:|:------:|
| 0.0065 | 1.0 | 10158 | 0.0022 | 0.9887 | 0.9962 | 0.9813 |
| 0.0006 | 2.0 | 20316 | 0.0030 | 0.9887 | 0.9962 | 0.9813 |
| 0.0008 | 3.0 | 30474 | 0.0029 | 0.9906 | 0.9962 | 0.9851 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Amalq/schizophrenia-roberta-large | Amalq | 2022-07-26T21:38:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Transformers",
"en",
"arxiv:1806.05258",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-26T20:07:00Z | ---
language: en
tags:
- Transformers
license: apache-2.0
datasets:
- SMHD
- Schizophrenia Reddit
---
# SchizophreniaRoberta model
SchizophreniaRoberta is a model initialized with [roberta-large](https://huggingface.co/roberta-large) and trained on Schizophrenia Reddit, a subset of the [Self-Reported Mental Health Diagnoses (SMHD) dataset](https://arxiv.org/pdf/1806.05258.pdf), which consists of Reddit posts by patients with schizophrenia only, or schizophrenia with other mental disorders, and matched controls. We follow the standard pretraining protocols of RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
## Usage
Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Amalq/schizophrenia-roberta-large")
model = AutoModel.from_pretrained("Amalq/schizophrenia-roberta-large")
```
Perplexity of this model is: 4.43 |
JoAmps/xlm-roberta-base-finetuned-panx-de | JoAmps | 2022-07-26T21:36:51Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-07-26T21:12:58Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616051071591427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1378
- F1: 0.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2569 | 1.0 | 525 | 0.1617 | 0.8228 |
| 0.1295 | 2.0 | 1050 | 0.1326 | 0.8514 |
| 0.0816 | 3.0 | 1575 | 0.1378 | 0.8616 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
affahrizain/distilbert-base-uncased-mlm-finetuned-imdb | affahrizain | 2022-07-26T20:44:44Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-26T19:19:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-mlm-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-mlm-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7029 | 1.0 | 391 | 0.6418 |
| 0.6557 | 2.0 | 782 | 0.6315 |
| 0.6482 | 3.0 | 1173 | 0.6268 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mcsabai/huBert-fine-tuned-hungarian-squadv2 | mcsabai | 2022-07-26T18:35:08Z | 63 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"question-answering",
"hu",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-07-06T07:34:04Z | ---
language: hu
thumbnail:
tags:
- question-answering
- bert
widget:
- text: "Melyik folyó szeli ketté Budapestet?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
- text: "Mivel juthatunk fel az Óvárosba?"
context: "Magyarország fővárosát, Budapestet a Duna folyó szeli ketté. A XIX. században épült Lánchíd a dimbes-dombos budai oldalt köti össze a sík Pesttel. A Várdomb oldalában futó siklóval juthatunk fel a budai Óvárosba, ahol a Budapesti Történeti Múzeum egészen a római időkig visszavezetve mutatja be a városi életet. A Szentháromság tér ad otthont a XIII. századi Mátyás-templomnak és a Halászbástya lőtornyainak, amelyekből messzire ellátva gyönyörködhetünk a városban."
---
## MODEL DESCRIPTION
huBERT base model (cased) fine-tuned on SQuADv2 (NEW!)
- huBert model + Tokenizer: https://huggingface.co/SZTAKI-HLT/hubert-base-cc
- Hungarian SQUADv2 dataset: Machine Translated SQuAD dataset (Google Translate API)
<p> <i> "SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.[1]" </i> </p>
## Model in action
- Fast usage with pipelines:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mcsabai/huBert-fine-tuned-hungarian-squadv2",
tokenizer="mcsabai/huBert-fine-tuned-hungarian-squadv2",
topk = 1,
handle_impossible_answer = True
)
predictions = qa_pipeline({
'context': "Máté vagyok és Budapesten élek már több mint 4 éve.",
'question': "Hol lakik Máté?"
})
print(predictions)
# output:
# {'score': 0.9892364144325256, 'start': 16, 'end': 26, 'answer': 'Budapesten'}
```
Two important parameters:
- <p> <b> topk </b> (int, optional, defaults to 1) — The number of answers to return (will be chosen by order of likelihood). Note that we return less than topk answers if there are not enough options available within the context. </p>
- <p> <b> handle_impossible_answer </b> (bool, optional, defaults to False): Whether or not we accept impossible as an answer. </p>
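For instance, building the pipeline with a larger `topk` makes it return a ranked list of candidate answers instead of a single prediction (illustrative sketch, reusing the repo id from above):
```python
from transformers import pipeline

# Illustrative only: with topk > 1 the pipeline returns a list of answer dicts.
qa_pipeline_top3 = pipeline(
    "question-answering",
    model="mcsabai/huBert-fine-tuned-hungarian-squadv2",
    tokenizer="mcsabai/huBert-fine-tuned-hungarian-squadv2",
    topk = 3,
    handle_impossible_answer = True
)
predictions = qa_pipeline_top3({
    'context': "Máté vagyok és Budapesten élek már több mint 4 éve.",
    'question': "Hol lakik Máté?"
})
# predictions: a list of up to 3 {'score', 'start', 'end', 'answer'} dicts;
# with handle_impossible_answer=True an empty answer ('') may appear for unanswerable questions.
```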
[1] https://rajpurkar.github.io/SQuAD-explorer/ |
aemami1/distilbert-base-uncased-finetuned-wnli | aemami1 | 2022-07-26T17:02:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T16:36:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5492957746478874
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-wnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6950
- Accuracy: 0.5493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
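For readers who want to reproduce a comparable run, these settings map onto 🤗 `TrainingArguments` roughly as in the sketch below; the output directory is a placeholder and the sketch is illustrative rather than the exact script used for this card:
```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="./wnli-finetune",   # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",     # linear decay, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```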
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6929 | 0.5211 |
| No log | 2.0 | 80 | 0.6951 | 0.4789 |
| No log | 3.0 | 120 | 0.6950 | 0.5493 |
| No log | 4.0 | 160 | 0.6966 | 0.5352 |
| No log | 5.0 | 200 | 0.6966 | 0.5352 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/bearfoothunter1-jockforbrains-recentrift | huggingtweets | 2022-07-26T16:57:52Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T16:37:53Z | ---
language: en
thumbnail: http://www.huggingtweets.com/bearfoothunter1-jockforbrains-recentrift/1658853737112/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492447040193900546/LtTdjrY7_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1550974872502796289/7i5bgWY2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1015932356937560069/EJSUv5Uk_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JockForBrains (☣️ May contain morphs) & Demonic Executioner & the real bearfoothunter 🇺🇦🇺🇦🇺🇦</div>
<div style="text-align: center; font-size: 14px;">@bearfoothunter1-jockforbrains-recentrift</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JockForBrains (☣️ May contain morphs) & Demonic Executioner & the real bearfoothunter 🇺🇦🇺🇦🇺🇦.
| Data | JockForBrains (☣️ May contain morphs) | Demonic Executioner | the real bearfoothunter 🇺🇦🇺🇦🇺🇦 |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 2261 | 3248 |
| Retweets | 211 | 177 | 64 |
| Short tweets | 467 | 104 | 746 |
| Tweets kept | 2560 | 1980 | 2438 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2susnztb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bearfoothunter1-jockforbrains-recentrift's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/18fa8jhh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/18fa8jhh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bearfoothunter1-jockforbrains-recentrift')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Cole/distilbert-base-uncased-finetuned-emotion | Cole | 2022-07-26T16:51:59Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-31T14:14:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9274111800508488
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2148
- Accuracy: 0.9275
- F1: 0.9274
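Although the sections below are placeholders, the published checkpoint can be loaded with the standard 🤗 pipeline API; the snippet below is an illustrative sketch rather than part of the original card:
```python
from transformers import pipeline

# Illustrative inference sketch; the repository id matches this model card.
classifier = pipeline(
    "text-classification",
    model="Cole/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
# -> [{'label': ..., 'score': ...}]  one emotion label with its confidence
```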
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3053 | 0.9075 | 0.9053 |
| 0.2421 | 2.0 | 500 | 0.2148 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/surlaroute | huggingtweets | 2022-07-26T16:42:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T16:39:40Z | ---
language: en
thumbnail: http://www.huggingtweets.com/surlaroute/1658853747255/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1305228695444090882/aU_Vlnvg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Melody 🧜🏻♀️</div>
<div style="text-align: center; font-size: 14px;">@surlaroute</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Melody 🧜🏻♀️.
| Data | Melody 🧜🏻♀️ |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 114 |
| Short tweets | 351 |
| Tweets kept | 2780 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/k1hti8dn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @surlaroute's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/cffupuun) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/cffupuun/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/surlaroute')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jockforbrains | huggingtweets | 2022-07-26T16:25:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T16:23:10Z | ---
language: en
thumbnail: http://www.huggingtweets.com/jockforbrains/1658852709222/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1492447040193900546/LtTdjrY7_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JockForBrains (☣️ May contain morphs)</div>
<div style="text-align: center; font-size: 14px;">@jockforbrains</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JockForBrains (☣️ May contain morphs).
| Data | JockForBrains (☣️ May contain morphs) |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 211 |
| Short tweets | 467 |
| Tweets kept | 2560 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jsjyesm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jockforbrains's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zi3c9sw9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zi3c9sw9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jockforbrains')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fourthbrain-demo/demo | fourthbrain-demo | 2022-07-26T16:08:23Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T16:07:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
fourthbrain-demo/bert_model_reddit_tsla_tracked_actions | fourthbrain-demo | 2022-07-26T16:00:15Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T10:14:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert_model_reddit_tsla_tracked_actions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_reddit_tsla_tracked_actions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
schnell/bert-small-juman-unigram | schnell | 2022-07-26T15:40:16Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-23T16:53:56Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-small-juman-unigram
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-juman-unigram
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4490
- Accuracy: 0.6911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 768
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 14
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.9849 | 1.0 | 69472 | 1.8385 | 0.6286 |
| 1.8444 | 2.0 | 138944 | 1.6912 | 0.6513 |
| 1.7767 | 3.0 | 208416 | 1.6322 | 0.6610 |
| 1.7357 | 4.0 | 277888 | 1.5931 | 0.6676 |
| 1.709 | 5.0 | 347360 | 1.5636 | 0.6719 |
| 1.6874 | 6.0 | 416832 | 1.5405 | 0.6756 |
| 1.6707 | 7.0 | 486304 | 1.5221 | 0.6786 |
| 1.6511 | 8.0 | 555776 | 1.5061 | 0.6817 |
| 1.636 | 9.0 | 625248 | 1.4933 | 0.6837 |
| 1.6295 | 10.0 | 694720 | 1.4784 | 0.6860 |
| 1.6157 | 11.0 | 764192 | 1.4673 | 0.6879 |
| 1.6027 | 12.0 | 833664 | 1.4605 | 0.6896 |
| 1.5942 | 13.0 | 903136 | 1.4535 | 0.6904 |
| 1.5866 | 14.0 | 972608 | 1.4490 | 0.6911 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.12.0+cu116
- Datasets 2.2.2
- Tokenizers 0.12.1
|
SummerChiam/rust_image_classification_5 | SummerChiam | 2022-07-26T15:16:23Z | 52 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-07-26T15:16:12Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_5
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9392405152320862
---
# rust_image_classification_5
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust
 |
swtx/ernie-3.0-base-chinese | swtx | 2022-07-26T14:58:41Z | 147 | 16 | transformers | [
"transformers",
"pytorch",
"arxiv:2106.02241",
"arxiv:2112.12731",
"arxiv:2107.02137",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-07-08T09:14:51Z | ---
license: apache-2.0
---
# ERNIE 3.0 Lightweight Models
**Table of Contents**
* [Model Introduction](#模型介绍)
  * [Online Distillation](#在线蒸馏技术)
* [Model Performance](#模型效果)
* [Fine-tuning](#微调)
* [Model Compression](#模型压缩)
  * [Requirements](#环境依赖)
  * [Using the Model Compression API](#模型压缩API使用)
  * [Compression Results](#压缩效果)
    * [Accuracy Tests](#精度测试)
    * [Performance Tests](#性能测试)
      * [CPU Performance](#CPU性能)
      * [GPU Performance](#CPU性能)
* [Speeding up with FasterTokenizer](#使用FasterTokenizer加速)
* [Deployment](#部署)
  * [Python Deployment](#Python部署)
  * [Serving Deployment](#服务化部署)
  * [Paddle2ONNX Deployment](#Paddle2ONNX部署)
* [Notebook Tutorials](#Notebook教程)
* [References](#参考文献)
<a name="模型介绍"></a>
## Model Introduction
The models open-sourced here are lightweight models obtained from the Wenxin large model ERNIE 3.0 through **online distillation**. Their architecture is identical to ERNIE 2.0, and they deliver stronger results on Chinese tasks than ERNIE 2.0.
For an in-depth discussion of the underlying techniques, see the article [《解析全球最大中文单体模型鹏城-百度·文心技术细节》 ("Technical details of PCL-Baidu Wenxin, the world's largest monolithic Chinese model")](https://www.jiqizhixin.com/articles/2021-12-08-9).
<a name="在线蒸馏技术"></a>
### Online Distillation
Online distillation periodically passes knowledge signals from the teacher to several student models that are trained simultaneously during the teacher's own training, so that student models of multiple sizes are produced in a single distillation stage. Compared with conventional distillation, this greatly reduces the compute that would otherwise be spent on extra distillation passes through the large model and on repeatedly transferring knowledge to each student.
This novel distillation scheme exploits the scale of the Wenxin large model: after distillation it guarantees both the quality and the size diversity of the student models, making them convenient for application scenarios with different performance requirements. Moreover, because the size gap between the Wenxin large model and the student models is enormous, distillation becomes extremely difficult and can even fail. To address this, a teacher-assistant model is introduced into the distillation process; the assistant acts as a bridge for knowledge transfer, narrowing the gap between the representation spaces of the student models and the large model and thereby improving distillation efficiency.
More technical details can be found in the papers:
- [ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression](https://arxiv.org/abs/2106.02241)
- [ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation](https://arxiv.org/abs/2112.12731)
<p align="center">
<img width="644" alt="image" src="https://user-images.githubusercontent.com/1371212/168516904-3fff73e0-010d-4bef-adc1-4d7c97a9c6ff.png" title="ERNIE 3.0 Online Distillation">
</p>
<a name="模型效果"></a>
## Model Performance
This project open-sources five models: **ERNIE 3.0 _Base_**, **ERNIE 3.0 _Medium_**, **ERNIE 3.0 _Mini_**, **ERNIE 3.0 _Micro_**, and **ERNIE 3.0 _Nano_**:
- [**ERNIE 3.0-_Base_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams) (_12-layer, 768-hidden, 12-heads_)
- [**ERNIE 3.0-_Medium_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams) (_6-layer, 768-hidden, 12-heads_)
- [**ERNIE 3.0-_Mini_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams) (_6-layer, 384-hidden, 12-heads_)
- [**ERNIE 3.0-_Micro_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams) (_4-layer, 384-hidden, 12-heads_)
- [**ERNIE 3.0-_Nano_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams) (_4-layer, 312-hidden, 12-heads_)
Below are the **accuracy–latency plots** for the lightweight Chinese models in PaddleNLP. The x-axis is the latency (in ms) measured on the IFLYTEK dataset with the maximum sequence length set to 128; the y-axis is the average accuracy over the 10 CLUE tasks (covering text classification, text matching, natural language inference, pronoun disambiguation, reading comprehension, and more), where the CMRC2018 reading-comprehension task is scored with Exact Match (EM) and all other tasks with Accuracy. The closer a model sits to the **upper left** of a plot, the better its accuracy and speed.
The parameter count of each model is annotated below its name in the plots; the test environment is described in [Performance Tests](#性能测试).
Accuracy–latency plots on CPU with batch_size=32 (1 and 8 threads):
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852121-2798b5c9-d122-4ac0-b4c8-da46b89b5512.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852129-bbe58835-8eec-45d5-a4a9-cc2cf9a3db6a.png"></a></td>
</tr>
</table>
Accuracy–latency plots on CPU with batch_size=1 (1 and 8 threads):
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852106-658e18e7-705b-4f53-bad0-027281163ae3.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175852112-4b89d675-7c95-4d75-84b6-db5a6ea95e2c.png"></a></td>
</tr>
</table>
Accuracy–latency plots on GPU with batch_size=32 and 1, using FP16 inference:
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175854679-3247f42e-8716-4a36-b5c6-9ce4661b36c7.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175854670-57878b34-c213-47ac-b620-aaaec082f435.png"></a></td>
</tr>
</table>
As the plots show, the lightweight ERNIE 3.0 models lead the Chinese models from UER-py, Huawei-Noah, and HFL in the combined accuracy/performance trade-off. Moreover, with batch_size=1 and FP16 inference, wide-and-shallow models have an additional inference-performance advantage on GPU.
The evaluation results on the CLUE **dev sets** are shown in the table below:
<table style="width:100%;" cellpadding="2" cellspacing="0" border="1" bordercolor="#000000">
<tbody>
<tr>
<td style="text-align:center;vertical-align:middle">
<span style="font-size:18px;">Arch</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">Model</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">AVG</span>
</td>
<td style="text-align:center">
<span style="font-size:18px;">AFQMC</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">TNEWS</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">IFLYTEK</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CMNLI</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">OCNLI</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CLUEWSC2020</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CSL</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CMRC2018</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">CHID</span>
</td>
<td style="text-align:center;">
<span style="font-size:18px;">C<sup>3</sup></span>
</td>
</tr>
<tr>
<td rowspan=2 align=center> 24L1024H </td>
<td style="text-align:center">
<span style="font-size:18px"><b>ERNIE 2.0-Large-zh</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>77.03</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.41</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>59.67</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.29</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.69</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">89.14</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.10</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>71.48/90.35</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">85.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>78.12</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoBERTa-wwm-ext-large</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.61</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>83.88</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.81</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>90.79</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.58/89.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>85.72</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.26</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 20L1024H </td>
<td style="text-align:center">
<span style="font-size:18px"><b>ERNIE 3.0-Xbase-zh</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>78.71</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.85</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>59.89</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.41</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.76</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>82.51</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>89.80</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.47</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>75.49/92.67</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>86.36</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.59</b></span>
</td>
</tr>
<tr>
<td rowspan=8 align=center> 12L768H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams">
ERNIE 3.0-Base-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.05</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>75.93</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>83.02</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>80.10</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">86.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.71/90.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>84.26</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>77.88</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE-Gram-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.28</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.88</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.87</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>88.82</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>82.83</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>71.82/90.38</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">84.04</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.69</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 2.0-Base-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.25</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.81</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">84.21</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.22/88.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.19</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">Langboat/Mengzi-BERT-Base</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.69</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.76</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">88.16</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.04/88.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.74</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.70</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">ERNIE 1.0-Base-zh</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.84</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>58.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.25</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.68</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.58</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">85.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.32/87.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.68</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoBERTa-wwm-ext</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.60</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.23</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.92</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">88.49</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.39/88.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">83.43</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.03</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">BERT-Base-Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.57</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.97</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.30/86.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">82.01</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.38</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Base</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.89</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.62</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">61.14</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.01</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.58</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.80</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.87/84.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.76</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 8L512H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Medium</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.06</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.10</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.35</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.09</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.63/78.91</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.84</span>
</td>
</tr>
<tr>
<td rowspan=5 align=center> 6L768H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams">
ERNIE 3.0-Medium-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>72.49</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>73.37</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>57.00</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>80.64</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>76.88</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.28</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>81.60</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>65.83/87.30</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>79.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>69.73</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">HLF/RBT6, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.06</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.45</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.36</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.32</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.67</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.72/84.77</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.85</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">TinyBERT<sub>6</sub>, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.62</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.70</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.12</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">80.17</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.03/83.75</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.11</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">RoFormerV2 Small</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>60.72</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.37</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">81.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.97/83.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.66</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.41</span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-L6-H768</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.09</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.54</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.49</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.00</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.04</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.74/75.52</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.73</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.40</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 6L384H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams">
ERNIE 3.0-Mini-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.85</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.24</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.48</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.19</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">79.30</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.53/81.97</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.60</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L768H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBT4, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.42</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">72.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">77.34</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">78.23</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.30/81.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.45</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L512H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Small</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.25</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.21</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.552</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.64</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.80</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.78</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">46.75/69.69</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.59</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">50.92</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L384H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams">
ERNIE 3.0-Micro-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">64.21</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.15</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.05</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.83</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">74.81</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.08</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.50</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.77/77.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">62.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.53</span>
</td>
</tr>
<tr>
<td rowspan=2 align=center> 4L312H </td>
<td style="text-align:center">
<span style="font-size:18px">
<a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams">
ERNIE 3.0-Nano-zh
</a>
</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>62.97</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.51</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>54.57</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>48.36</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>74.97</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.61</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">68.75</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>75.93</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>52.00/76.35</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>58.91</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>55.11</b></span>
</td>
</tr>
<tr>
<td style="text-align:center">
<span style="font-size:18px">TinyBERT<sub>4</sub>, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">60.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">39.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">73.94</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.59</span>
</td>
<td style="text-align:center">
<span style="font-size:18px"><b>70.07</b></span>
</td>
<td style="text-align:center">
<span style="font-size:18px">75.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">46.04/69.34</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">52.18</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 4L256H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Mini</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">53.40</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.32</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.22</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">41.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.40</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.36</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.07</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">5.96/17.13</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">51.19</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">39.68</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 3L1024H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBTL3, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">66.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">56.14</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.56</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.41</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.29</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.74</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.93</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">58.50/80.90</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">71.03</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.56</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 3L768H </td>
<td style="text-align:center">
<span style="font-size:18px">HFL/RBT3, Chinese</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">65.72</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.53</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.18</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.20</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.71</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.11</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">76.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">55.73/78.63</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">70.26</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">54.93</span>
</td>
</tr>
<tr>
<td rowspan=1 align=center> 2L128H </td>
<td style="text-align:center">
<span style="font-size:18px">UER/Chinese-RoBERTa-Tiny</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">44.45</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">69.02</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">51.47</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">20.28</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">59.95</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">57.73</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">63.82</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">67.43</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">3.08/14.33</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">23.57</span>
</td>
<td style="text-align:center">
<span style="font-size:18px">28.12</span>
</td>
</tr>
<tbody>
</table>
<br />
The directory structure of this project is as follows:
```shell
.
├── run_seq_cls.py        # fine-tuning script for classification tasks
├── run_token_cls.py      # fine-tuning script for sequence labeling tasks
├── run_qa.py             # fine-tuning script for reading comprehension tasks
├── compress_seq_cls.py   # compression script for classification tasks
├── compress_token_cls.py # compression script for sequence labeling tasks
├── compress_qa.py        # compression script for reading comprehension tasks
├── config.yml            # compression configuration file
├── infer.py              # inference script supporting the CLUE classification, CLUE CMRC2018 and MSRA_NER tasks
├── deploy                # deployment directory
│   └── python
│       └── ernie_predictor.py
│       └── infer_cpu.py
│       └── infer_gpu.py
│       └── README.md
│   └── serving
│       └── seq_cls_rpc_client.py
│       └── seq_cls_service.py
│       └── seq_cls_config.yml
│       └── token_cls_rpc_client.py
│       └── token_cls_service.py
│       └── token_cls_config.yml
│       └── README.md
│   └── paddle2onnx
│       └── ernie_predictor.py
│       └── infer.py
│       └── README.md
└── README.md             # this documentation file
```
<a name="微调"></a>
## Fine-tuning
The pretrained models released with ERNIE 3.0 cannot be used on downstream tasks out of the box; they first need to be fine-tuned with data from the specific task.
With PaddleNLP, a single line of code is enough to obtain an ERNIE 3.0 model, which can then be fine-tuned on your own downstream data to get a model that performs better on the task at hand.
```python
from paddlenlp.transformers import *
tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-medium-zh")
# for classification tasks
seq_cls_model = AutoModelForSequenceClassification.from_pretrained("ernie-3.0-medium-zh")
# for sequence labeling tasks
token_cls_model = AutoModelForTokenClassification.from_pretrained("ernie-3.0-medium-zh")
# for reading comprehension tasks
qa_model = AutoModelForQuestionAnswering.from_pretrained("ernie-3.0-medium-zh")
```
This project provides example fine-tuning scripts for three scenarios: classification (covering text classification, text matching, natural language inference, pronoun disambiguation, and similar tasks), sequence labeling, and reading comprehension; see `run_seq_cls.py`, `run_token_cls.py`, and `run_qa.py` respectively. They are launched as follows:
```shell
# classification task
python run_seq_cls.py --task_name tnews --model_name_or_path ernie-3.0-medium-zh --do_train
# sequence labeling task
python run_token_cls.py --task_name msra_ner --model_name_or_path ernie-3.0-medium-zh --do_train
# reading comprehension task
python run_qa.py --model_name_or_path ernie-3.0-medium-zh --do_train
```
<a name="模型压缩"></a>
## Model Compression
Although ERNIE 3.0 already provides well-performing 6-layer and 4-layer lightweight models that can be used directly after fine-tuning, deploying a model to production often requires shrinking it further. The compression scheme and API provided here can be used to compress the model obtained from the fine-tuning step above.
<a name="环境依赖"></a>
### Requirements
The pruning feature requires the paddleslim package:
```shell
pip install paddleslim
```
<a name="模型压缩API使用"></a>
### Using the Model Compression API
This project provides a model compression API built on top of PaddleNLP's Trainer API. The compression API lets users prune and quantize downstream-task models fine-tuned from Transformer models such as ERNIE and BERT. A single call to `compress()` starts pruning and quantization and automatically saves the compressed model.
The compression API can be used as follows (the snippet only shows the core calls; to run a complete example, refer to the full sample scripts below):
```python
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer)

output_dir = os.path.join(model_args.model_name_or_path, "compress")

compress_config = CompressConfig(
    quantization_config=PTQConfig(algo_list=['hist', 'mse'],
                                  batch_size_list=[4, 8, 16]),
    prune_config=DynabertConfig(width_mult_list=[3/4]))

trainer.compress(
    output_dir,
    pruning=True,        # enable pruning
    quantization=True,   # enable quantization
    compress_config=compress_config)
```
Since the compression API is built on the Trainer, a Trainer instance must be initialized first. The parameters required for model compression are:
- `model`: an ERNIE, BERT or similar model that has been fine-tuned on the downstream task. For a classification model it can be obtained via `AutoModelForSequenceClassification.from_pretrained(model_name_or_path)`
- `data_collator`: all three task types can use the [DataCollator classes](../../paddlenlp/data/data_collator.py) predefined in PaddleNLP; the `data_collator` performs operations such as `Pad` on the data. See the code in this project for usage
- `train_dataset`: the training set used for pruning training
- `eval_dataset`: the evaluation set used during pruning training, which is also the calibration data for quantization
- `tokenizer`: the `tokenizer` corresponding to `model`; it can be obtained via `AutoTokenizer.from_pretrained(model_name_or_path)`
Then `compress` can be called directly to start compression. Its parameters are explained below:
- `output_dir`: the directory where the pruned and quantized models are saved
- `pruning`: whether to prune; defaults to `True`
- `quantization`: whether to quantize; defaults to `True`
- `compress_config`: the compression configuration. Pruning and quantization config instances need to be passed in separately; currently only the `DynabertConfig` class is supported for pruning and the `PTQConfig` class for quantization. When the defaults do not meet your needs, the compression process can be customized via the following parameters:
`DynabertConfig` accepts the following parameters:
- `width_mult_list`: the list of width ratios to keep when pruning; `3/4` is recommended for 6-layer models and `2/3` for 12-layer models, indicating the proportion of the `q`, `k`, `v` and `ffn` weight widths that is kept. Defaults to `[3/4]`
- `output_filename_prefix`: filename prefix of the model exported after pruning; defaults to `"float32"`
`PTQConfig` accepts the following parameters:
- `algo_list`: the list of quantization strategies. Currently `KL`, `abs_max`, `min_max`, `avg`, `hist` and `mse` are supported; different strategies compute the quantization scale factors differently. Passing in several strategies is recommended, since that produces multiple quantized models in one batch from which the best one can be chosen. `hist`, `mse` and `KL` are recommended; defaults to `["hist"]`
- `batch_size_list`: the number of calibration samples; defaults to `[4]`. Larger is not necessarily better; this is also a hyperparameter, and passing in several values is recommended so the best quantized model can be selected
- `input_dir`: the directory of the model to be quantized. If `None`, the model to be quantized is the model initialized by the `Trainer` when pruning is disabled, or the model exported after pruning when pruning is enabled. Defaults to `None`
- `input_filename_prefix`: filename prefix of the model to be quantized; defaults to `"float32"`
- `output_filename_prefix`: filename prefix of the exported quantized model; defaults to `"int8"`
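Putting these options together, a fully-specified configuration might look like the sketch below. This is illustrative only: the width ratio, strategy list and calibration sizes are example values, it reuses the `trainer` and the `CompressConfig`/`DynabertConfig`/`PTQConfig` classes from the snippet above, and the keyword used to pass the pruning config is an assumption.
```python
prune_cfg = DynabertConfig(
    width_mult_list=[3/4],            # keep 3/4 of the q/k/v/ffn width (6-layer model)
    output_filename_prefix="float32")

quant_cfg = PTQConfig(
    algo_list=["hist", "mse", "KL"],  # produce one quantized model per strategy, keep the best
    batch_size_list=[4, 8, 16],       # also sweep the number of calibration samples
    input_filename_prefix="float32",
    output_filename_prefix="int8")

compress_config = CompressConfig(
    quantization_config=quant_cfg,
    prune_config=prune_cfg)           # keyword name for the pruning config is assumed

trainer.compress(
    "best_models/TNEWS/compress",     # example output directory
    pruning=True,
    quantization=True,
    compress_config=compress_config)
```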
This project also provides examples of using the compression API in the three scenarios: classification (including text classification, text matching, natural language inference, pronoun disambiguation, etc.), sequence labeling, and reading comprehension. See `compress_seq_cls.py`, `compress_token_cls.py` and `compress_qa.py`; they can be launched as follows:
```shell
# --model_name_or_path is the directory of the model produced by the fine-tuning step above; the compressed model will also be saved in that directory
# classification task
python compress_seq_cls.py --dataset "clue tnews" --model_name_or_path best_models/TNEWS --output_dir ./
# sequence labeling task
python compress_token_cls.py --dataset "msra_ner" --model_name_or_path best_models/MSRA_NER --output_dir ./
# reading comprehension task
python compress_qa.py --dataset "clue cmrc2018" --model_name_or_path best_models/CMRC2018 --output_dir ./
```
The accuracy of the compressed models can then be verified with a single command:
```shell
# original model
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/inference/infer --use_trt
# after pruning
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/0.75/float --use_trt
# after quantization
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/0.75/hist16/int8 --use_trt --precision int8
```
The `--model_path` argument must be the path and filename prefix of the static-graph model.
**Tips for using the compression API:**
1. Model compression is mainly intended to speed up inference and deployment, so the compressed models are static-graph models and can no longer be loaded with the `from_pretrained()` API for further training.
2. The compression API `compress()` enables both pruning and quantization by default, but either can be turned off by setting pruning=False or quantization=False in `compress()`. The current pruning strategy involves an extra training stage that requires downstream-task data; its duration depends on the amount of downstream data and is of the same order as fine-tuning. Quantization needs no extra training and is faster, and its speedup is more pronounced than pruning's, but quantization alone may also lose more accuracy.
3. Pruning is similar to distillation, so for convenience the fine-tuning hyperparameters can be reused directly. To squeeze out more accuracy, run a grid search over hyperparameters such as `batch_size`, `learning_rate` and `epoch`.
<a name="压缩效果"></a>
### Compression Results
<a name="精度测试"></a>
#### Accuracy
In this example we use the compression API to compress ERNIE 3.0-Medium models fine-tuned on the three task types. The accuracy after compression is as follows:
| Model | AVG | AFQMC | TNEWS | IFLYTEK | CMNLI | OCNLI | CLUEWSC2020 | CSL | CMRC2018 | MSRA_NER |
| ------------------------------- | ----- | ----- | ----- | ------- | ----- | ----- | ----------- | ----- | ----------- | ----------------- |
| ERNIE 3.0-Medium | 74.87 | 75.35 | 57.45 | 60.18 | 81.16 | 77.19 | 80.59 | 81.93 | 66.95/87.15 | 92.65/93.43/93.04 |
| ERNIE 3.0-Medium+FP16 | 74.87 | 75.32 | 57.45 | 60.22 | 81.16 | 77.22 | 80.59 | 81.90 | 66.95/87.16 | 92.65/93.45/93.05 |
| ERNIE 3.0-Medium+pruning+FP32 | 74.70 | 75.14 | 57.31 | 60.29 | 81.25 | 77.46 | 79.93 | 81.70 | 65.92/86.43 | 93.10/93.43/93.27 |
| ERNIE 3.0-Medium+pruning+FP16 | 74.71 | 75.21 | 57.27 | 60.29 | 81.24 | 77.56 | 79.93 | 81.73 | 65.89/86.44 | 93.10/93.43/93.27 |
| ERNIE 3.0-Medium+pruning+quantization+INT8 | 74.44 | 75.02 | 57.26 | 60.37 | 81.03 | 77.25 | 77.96 | 81.67 | 66.17/86.55 | 93.17/93.23/93.20 |
| ERNIE 3.0-Medium+quantization+INT8 | 74.10 | 74.67 | 56.99 | 59.91 | 81.03 | 75.05 | 78.62 | 81.60 | 66.32/86.82 | 93.10/92.90/92.70 |
**Metrics:** For the CLUE classification tasks (AFQMC semantic similarity, TNEWS text classification, IFLYTEK long-text classification, CMNLI natural language inference, OCNLI natural language inference, CLUEWSC2020 pronoun disambiguation, CSL paper-keyword recognition) the metric is Accuracy. For the reading comprehension task CLUE CMRC2018 the metrics are EM (Exact Match) / F1-Score, and EM is used when computing the average. For the sequence labeling task MSRA_NER the metrics are Precision/Recall/F1-Score, and F1-Score is used when computing the average.
As the table shows, after pruning and quantization the accuracy of `ERNIE 3.0-Medium` drops by 0.46 on average: pruning alone costs 0.17, while quantization alone costs 0.77 on average.
<a name="性能测试"></a>
#### Performance
The performance tests were configured as follows:
1. Datasets: TNEWS (text classification), MSRA_NER (sequence labeling), CLUE CMRC2018 (reading comprehension)
2. GPU: T4, CUDA 11.2, cuDNN 8.2
3. CPU: Intel(R) Xeon(R) Gold 6271C CPU
4. PaddlePaddle version: 2.3
5. PaddleNLP version: 2.3
6. Performance is reported in QPS. Measurement method: with the batch size fixed at 32, measure the total running time total_time and compute QPS = total_samples / total_time
7. Accuracy metrics: Accuracy for text classification, F1-Score for sequence labeling, EM (Exact Match) for reading comprehension
<a name="CPU性能"></a>
##### CPU Performance
The test environment and notes are as above; for the CPU tests the number of threads was set to 12.
| | TNEWS Performance | TNEWS Accuracy | MSRA_NER Performance | MSRA_NER Accuracy | CMRC2018 Performance | CMRC2018 Accuracy |
| -------------------------- | ------------ | ------------ | ------------- | ------------- | ------------- | ------------- |
| ERNIE 3.0-Medium+FP32 | 311.95(1.0x) | 57.45 | 90.91(1.0x) | 93.04 | 33.74(1.0x) | 66.95 |
| ERNIE 3.0-Medium+INT8 | 600.35(1.9x) | 56.57(-0.88) | 141.00(1.6x) | 92.64(-0.40) | 56.51(1.7x) | 66.23(-0.72) |
| ERNIE 3.0-Medium+pruning+FP32 | 408.65(1.3x) | 57.31(-0.14) | 122.13(1.3x) | 93.27(+0.23) | 48.47(1.4x) | 65.55(-1.40) |
| ERNIE 3.0-Medium+pruning+INT8 | 704.42(2.3x) | 56.69(-0.76) | 215.58(2.4x) | 92.39(-0.65) | 75.23(2.2x) | 63.47(-3.48) |
After the same compression process, the three task types (classification, sequence labeling, reading comprehension) reach a speedup of roughly 2.3x.
<a name="GPU性能"></a>
##### GPU Performance
| | TNEWS Performance | TNEWS Accuracy | MSRA_NER Performance | MSRA_NER Accuracy | CMRC2018 Performance | CMRC2018 Accuracy |
| -------------------------- | ------------- | ------------ | ------------- | ------------- | ------------- | ------------- |
| ERNIE 3.0-Medium+FP32 | 1123.85(1.0x) | 57.45 | 366.75(1.0x) | 93.04 | 146.84(1.0x) | 66.95 |
| ERNIE 3.0-Medium+FP16 | 2672.41(2.4x) | 57.45(0.00) | 840.11(2.3x) | 93.05(0.01) | 303.43(2.1x) | 66.95(0.00) |
| ERNIE 3.0-Medium+INT8 | 3226.26(2.9x) | 56.99(-0.46) | 889.33(2.4x) | 92.70(-0.34) | 348.84(2.4x) | 66.32(-0.63) |
| ERNIE 3.0-Medium+pruning+FP32 | 1424.01(1.3x) | 57.31(-0.14) | 454.27(1.2x) | 93.27(+0.23) | 183.77(1.3x) | 65.92(-1.03) |
| ERNIE 3.0-Medium+pruning+FP16 | 3577.62(3.2x) | 57.27(-0.18) | 1138.77(3.1x) | 93.27(+0.23) | 445.71(3.0x) | 65.89(-1.06) |
| ERNIE 3.0-Medium+pruning+INT8 | 3635.48(3.2x) | 57.26(-0.19) | 1105.26(3.0x) | 93.20(+0.16) | 444.27(3.0x) | 66.17(-0.78) |
After pruning + quantization, all three task types (classification, sequence labeling, reading comprehension) reach a speedup of about 3x, with the average accuracy loss across all tasks kept within 0.5 (0.46).
<a name="使用FasterTokenizer加速"></a>
### Speeding Up with FasterTokenizer
FasterTokenizer is PaddlePaddle's high-performance text-processing operator library. It integrates the LinMaxMatch algorithm released by Google at the end of 2021, which uses Aho-Corasick to reduce the time complexity of WordPiece from O(N<sup>2</sup>) to O(N) and has been deployed at scale in Google Search. FasterTokenizer is significantly faster, and its advantage grows with batch size: with batch_size = 64, for example, FasterTokenizer tokenizes 28 times faster than HuggingFace.
On top of the pruned and quantized ERNIE 3.0 lightweight model, with the number of tokenization threads set to 4, FasterTokenizer improves performance on the IFLYTEK dataset (long-text classification, maximum sequence length 128) by 2.39x on an NVIDIA Tesla T4, and by 7.09x over BERT-Base; on an Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz with 8 threads it improves performance by 1.27x, and by 5.13x over BERT-Base. The speedups are shown in the figures below:
<table>
<tr>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175452331-bc5ff646-90ee-4377-85a5-d5b073a8e7f9.png"></a></td>
<td><a><img src="https://user-images.githubusercontent.com/26483581/175452337-e0eff0d3-ed5f-42e7-b06b-caad61f37978.png"></a></td>
</tr>
</table>
Using FasterTokenizer is very simple: after installing the faster_tokenizer package, just pass `use_faster=True` when instantiating the tokenizer. BERT, ERNIE, TinyBERT and other models are currently supported on Linux.
Install the faster_tokenizer package with:
```shell
pip install faster_tokenizer
```
To set the number of tokenization threads, set the environment variable `OMP_NUM_THREADS` before running:
```shell
# set the number of tokenization threads to 4
export OMP_NUM_THREADS=4
```
When calling `from_pretrained`, simply pass the single argument `use_faster=True`:
```python
from paddlenlp.transformers import AutoTokenizer
AutoTokenizer.from_pretrained("ernie-3.0-medium-zh", use_faster=True)
```
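As a rough illustration of the speed difference, the sketch below (not part of the original project; the sample text and repeat count are arbitrary) times batch tokenization with and without `use_faster=True`:
```python
import time
from paddlenlp.transformers import AutoTokenizer

texts = ["这是一段用来测试切词速度的示例文本。"] * 64  # one batch of 64 sentences

for use_faster in (False, True):
    tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-medium-zh", use_faster=use_faster)
    start = time.perf_counter()
    for _ in range(100):  # repeat the batch for a more stable measurement
        tokenizer(texts)
    elapsed = time.perf_counter() - start
    print(f"use_faster={use_faster}: {elapsed:.2f}s for 100 batches of 64")
```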
<a name="部署"></a>
## Deployment
We provide several deployment options for ERNIE 3.0 to cover different scenarios; choose the one that fits your needs.
<p align="center">
<img width="700" alt="image" src="https://user-images.githubusercontent.com/26483581/175260618-610a160c-270c-469a-842c-96871243c4ed.png">
</p>
<a name="Python部署"></a>
### Python Deployment
For Python deployment, see the [Python deployment guide](./deploy/python/README.md)
<a name="服务化部署"></a>
### Serving Deployment
- [Triton Inference Server deployment guide](./deploy/triton/README.md)
- [Paddle Serving deployment guide](./deploy/serving/README.md)
<a name="Paddle2ONNX部署"></a>
### Paddle2ONNX Deployment
For ONNX export and ONNXRuntime deployment, see the [ONNX export and ONNXRuntime deployment guide](./deploy/paddle2onnx/README.md)
### Paddle Lite Mobile Deployment
Coming soon, stay tuned.
<a name="参考文献"></a>
## Notebook Tutorials
- [Getting started with ERNIE 3.0: Chinese sentiment analysis](https://aistudio.baidu.com/aistudio/projectdetail/3955163)
- [Getting started with ERNIE 3.0: multi-label classification of legal text](https://aistudio.baidu.com/aistudio/projectdetail/3996601)
- [Getting started with ERNIE 3.0: Chinese semantic matching](https://aistudio.baidu.com/aistudio/projectdetail/3986803)
- [Getting started with ERNIE 3.0: MSRA sequence labeling](https://aistudio.baidu.com/aistudio/projectdetail/3989073)
- [Getting started with ERNIE 3.0: machine reading comprehension](https://aistudio.baidu.com/aistudio/projectdetail/2017189)
- [Getting started with ERNIE 3.0: dialogue intent recognition](https://aistudio.baidu.com/aistudio/projectdetail/2017202?contributionType=1)
## References
* Sun Y, Wang S, Feng S, et al. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation[J]. arXiv preprint arXiv:2107.02137, 2021.
* Su W, Chen X, Feng S, et al. ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression[J]. arXiv preprint arXiv:2106.02241, 2021.
* Wang S, Sun Y, Xiang Y, et al. ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation[J]. arXiv preprint arXiv:2112.12731, 2021. |
huggingtweets/acrasials_art | huggingtweets | 2022-07-26T14:30:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T14:29:17Z | ---
language: en
thumbnail: http://www.huggingtweets.com/acrasials_art/1658845828038/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1459339266060918789/mjxa2TwP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Acrasial! 🫡</div>
<div style="text-align: center; font-size: 14px;">@acrasials_art</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Acrasial! 🫡.
| Data | Acrasial! 🫡 |
| --- | --- |
| Tweets downloaded | 3235 |
| Retweets | 1321 |
| Short tweets | 492 |
| Tweets kept | 1422 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3imbmus0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @acrasials_art's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/asit6thi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/asit6thi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/acrasials_art')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AlexChe/Reinforce-1 | AlexChe | 2022-07-26T14:12:15Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-26T14:12:08Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- metrics:
- type: mean_reward
value: 11.40 +/- 7.09
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
SummerChiam/rust_image_classification_10 | SummerChiam | 2022-07-26T14:07:46Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-07-26T14:07:33Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rust_image_classification_4
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9417721629142761
---
# rust_image_classification_4
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### nonrust

#### rust
 |
AlexChe/Reinforce-0 | AlexChe | 2022-07-26T13:46:20Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-26T13:46:10Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-0
results:
- metrics:
- type: mean_reward
value: 56.30 +/- 17.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
nlp-esg-scoring/bert-base-finetuned-esg-snpcsr-clean | nlp-esg-scoring | 2022-07-26T13:15:36Z | 4 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-25T01:50:47Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: nlp-esg-scoring/bert-base-finetuned-esg-snpcsr-clean
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-snpcsr-clean
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4074
- Validation Loss: 2.2353
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4095 | 2.2167 | 0 |
| 2.4085 | 2.2081 | 1 |
| 2.4117 | 2.2194 | 2 |
| 2.4127 | 2.2173 | 3 |
| 2.4063 | 2.2011 | 4 |
| 2.4114 | 2.2102 | 5 |
| 2.4177 | 2.2123 | 6 |
| 2.4102 | 2.2174 | 7 |
| 2.4096 | 2.2211 | 8 |
| 2.4074 | 2.2353 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
r3sist/ppo-LunarLander-v2 | r3sist | 2022-07-26T12:44:40Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-26T12:44:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 68.28 +/- 110.38
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
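A minimal sketch of how such a checkpoint is typically loaded and evaluated with `huggingface_sb3` is shown below; the zip filename inside the repo is an assumption, so adjust it to the actual file listed in this repository.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub("r3sist/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```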
|
azizkh/autotrain-j-multi-classification-1181044057 | azizkh | 2022-07-26T11:34:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"ar",
"dataset:azizkh/autotrain-data-j-multi-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-26T11:33:01Z | ---
tags: autotrain
language: ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- azizkh/autotrain-data-j-multi-classification
co2_eq_emissions: 1.2309703499286417
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1181044057
- CO2 Emissions (in grams): 1.2309703499286417
## Validation Metrics
- Loss: 0.896309494972229
- Accuracy: 0.7192982456140351
- Macro F1: 0.5870079610791685
- Micro F1: 0.7192982456140351
- Weighted F1: 0.719743631524632
- Macro Precision: 0.6779761904761905
- Micro Precision: 0.7192982456140351
- Weighted Precision: 0.8012949039264828
- Macro Recall: 0.5941468253968254
- Micro Recall: 0.7192982456140351
- Weighted Recall: 0.7192982456140351
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/azizkh/autotrain-j-multi-classification-1181044057
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("azizkh/autotrain-j-multi-classification-1181044057", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("azizkh/autotrain-j-multi-classification-1181044057", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
FAICAM/wav2vec2-base-timit-demo-google-colab | FAICAM | 2022-07-26T11:07:42Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-26T07:49:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5725
- Wer: 0.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.508 | 1.0 | 500 | 1.9315 | 0.9962 |
| 0.8832 | 2.01 | 1000 | 0.5552 | 0.5191 |
| 0.4381 | 3.01 | 1500 | 0.4451 | 0.4574 |
| 0.2983 | 4.02 | 2000 | 0.4096 | 0.4265 |
| 0.2232 | 5.02 | 2500 | 0.4280 | 0.4083 |
| 0.1811 | 6.02 | 3000 | 0.4307 | 0.3942 |
| 0.1548 | 7.03 | 3500 | 0.4453 | 0.3889 |
| 0.1367 | 8.03 | 4000 | 0.5043 | 0.4138 |
| 0.1238 | 9.04 | 4500 | 0.4530 | 0.3807 |
| 0.1072 | 10.04 | 5000 | 0.4435 | 0.3660 |
| 0.0978 | 11.04 | 5500 | 0.4739 | 0.3676 |
| 0.0887 | 12.05 | 6000 | 0.5052 | 0.3761 |
| 0.0813 | 13.05 | 6500 | 0.5098 | 0.3619 |
| 0.0741 | 14.06 | 7000 | 0.4666 | 0.3602 |
| 0.0654 | 15.06 | 7500 | 0.5642 | 0.3657 |
| 0.0589 | 16.06 | 8000 | 0.5489 | 0.3638 |
| 0.0559 | 17.07 | 8500 | 0.5260 | 0.3598 |
| 0.0562 | 18.07 | 9000 | 0.5250 | 0.3640 |
| 0.0448 | 19.08 | 9500 | 0.5215 | 0.3569 |
| 0.0436 | 20.08 | 10000 | 0.5117 | 0.3560 |
| 0.0412 | 21.08 | 10500 | 0.4910 | 0.3570 |
| 0.0336 | 22.09 | 11000 | 0.5221 | 0.3524 |
| 0.031 | 23.09 | 11500 | 0.5278 | 0.3480 |
| 0.0339 | 24.1 | 12000 | 0.5353 | 0.3486 |
| 0.0278 | 25.1 | 12500 | 0.5342 | 0.3462 |
| 0.0251 | 26.1 | 13000 | 0.5399 | 0.3439 |
| 0.0242 | 27.11 | 13500 | 0.5626 | 0.3431 |
| 0.0214 | 28.11 | 14000 | 0.5749 | 0.3408 |
| 0.0216 | 29.12 | 14500 | 0.5725 | 0.3413 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
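Since the card has no usage section, a minimal transcription sketch might look like the following. This is illustrative only: it assumes the repository also contains the processor files saved during training and that the audio is a mono waveform resampled to 16 kHz; the wav path is a placeholder.
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("FAICAM/wav2vec2-base-timit-demo-google-colab")
model = Wav2Vec2ForCTC.from_pretrained("FAICAM/wav2vec2-base-timit-demo-google-colab")

# Load a 16 kHz mono waveform (path is a placeholder)
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```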
|
mhaegeman/wav2vec2-large-xls-r-300m-dutch | mhaegeman | 2022-07-26T11:03:34Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-22T12:00:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-dutch-V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-dutch-V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4262
- eval_wer: 0.3052
- eval_runtime: 8417.9087
- eval_samples_per_second: 0.678
- eval_steps_per_second: 0.085
- epoch: 5.33
- step: 2400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
inokufu/flaubert-base-uncased-xnli-sts-finetuned-education | inokufu | 2022-07-26T10:59:20Z | 9 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"flaubert",
"feature-extraction",
"sentence-similarity",
"transformers",
"Education",
"fr",
"xnli",
"stsb_multi_mt",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1810.04805",
"arxiv:1809.05053",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
language: fr
tags:
- sentence-similarity
- transformers
- Education
- fr
- flaubert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/bertheo
A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the French flaubert-base-uncased pre-trained model [1, 2].
It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences.
It was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, entailment).
It was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences.
This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
embeddings = model.encode(sentences)
print(embeddings)
```
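The two course-description embeddings computed above can then be compared with cosine similarity; this small addition is illustrative and not part of the original card.
```python
from sentence_transformers import util

# Cosine similarity between the two course-description embeddings
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")
```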
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
STS (fr) score: 83.05%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br>
[2] https://huggingface.co/flaubert/flaubert_base_uncased <br>
[3] https://arxiv.org/abs/1810.04805 <br>
[4] https://arxiv.org/abs/1809.05053 <br>
[5] https://huggingface.co/datasets/stsb_multi_mt <br>
|
chainyo/sklearn-digits | chainyo | 2022-07-26T09:36:19Z | 0 | 0 | null | [
"PyTorch",
"CNN",
"dataset:sklearn-digits",
"region:us"
]
| null | 2022-07-25T14:29:37Z | ---
tags:
- PyTorch
- CNN
datasets:
- sklearn-digits
---
Basic TinyCNN PyTorch model trained on Sklearn Digits dataset.
```python
"""
Credits to Zama.ai - https://github.com/zama-ai/concrete-ml/blob/main/docs/user/advanced_examples/ConvolutionalNeuralNetwork.ipynb
"""
import numpy as np
import torch
from torch import nn
from torch.nn.utils import prune
class TinyCNN(nn.Module):
"""A very small CNN to classify the sklearn digits dataset.
This class also allows pruning to a maximum of 10 active neurons, which
should help keep the accumulator bit width low.
"""
def __init__(self, n_classes) -> None:
"""Construct the CNN with a configurable number of classes."""
super().__init__()
# This network has a total complexity of 1216 MAC
self.conv1 = nn.Conv2d(1, 2, 3, stride=1, padding=0)
self.conv2 = nn.Conv2d(2, 3, 3, stride=2, padding=0)
self.conv3 = nn.Conv2d(3, 16, 2, stride=1, padding=0)
self.fc1 = nn.Linear(16, n_classes)
# Enable pruning, prepared for training
self.toggle_pruning(True)
def toggle_pruning(self, enable):
"""Enables or removes pruning."""
# Maximum number of active neurons (i.e. corresponding weight != 0)
n_active = 10
# Go through all the convolution layers
for layer in (self.conv1, self.conv2, self.conv3):
s = layer.weight.shape
# Compute fan-in (number of inputs to a neuron)
# and fan-out (number of neurons in the layer)
st = [s[0], np.prod(s[1:])]
# The number of input neurons (fan-in) is the product of
# the kernel width x height x inChannels.
if st[1] > n_active:
if enable:
# This will create a forward hook to create a mask tensor that is multiplied
# with the weights during forward. The mask will contain 0s or 1s
prune.l1_unstructured(layer, "weight", (st[1] - n_active) * st[0])
else:
# When disabling pruning, the mask is multiplied with the weights
# and the result is stored in the weights member
prune.remove(layer, "weight")
def forward(self, x):
"""Run inference on the tiny CNN, apply the decision layer on the reshaped conv output."""
x = self.conv1(x)
x = torch.relu(x)
x = self.conv2(x)
x = torch.relu(x)
x = self.conv3(x)
x = torch.relu(x)
x = x.view(-1, 16)
x = self.fc1(x)
return x
```
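A quick smoke test of the architecture on sklearn-digits-shaped inputs might look like this (illustrative only: the weights here are freshly initialized, not the trained checkpoint stored in this repo):
```python
# Instantiate the 10-class model and run a dummy batch of 8x8 "digit" images
model = TinyCNN(n_classes=10)
dummy = torch.randn(4, 1, 8, 8)   # sklearn digits are 8x8 grayscale images
logits = model(dummy)
print(logits.shape)               # torch.Size([4, 10])
```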
|
bigscience/dechonk-logs-2 | bigscience | 2022-07-26T09:29:15Z | 0 | 0 | null | [
"tensorboard",
"region:us"
]
| null | 2022-06-22T06:54:49Z | ### Comparison of downsampling methods after 2.5B tokens
|
Frikallo/Dodo82J | Frikallo | 2022-07-26T08:24:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T08:23:37Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Dodo82J
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Dodo82J
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 3064995158
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
th1s1s1t/dqn-SpaceInvadersNoFrameskip-v1 | th1s1s1t | 2022-07-26T08:15:49Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-26T08:15:11Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 600.00 +/- 193.08
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga th1s1s1t -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga th1s1s1t
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
rajat99/Fine_Tuning_XLSR_300M_testing_6_model | rajat99 | 2022-07-26T07:16:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-26T06:03:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Fine_Tuning_XLSR_300M_testing_6_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_XLSR_300M_testing_6_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2263
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.466 | 23.53 | 400 | 3.2263 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Frikallo/output | Frikallo | 2022-07-26T07:08:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T07:05:18Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001372
- train_batch_size: 1
- eval_batch_size: 8
- seed: 2811898863
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
codingJacob/distilbert-base-uncased-finetuned-ner | codingJacob | 2022-07-26T06:35:39Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metric:
name: Accuracy
type: accuracy
value: 0.9843042559613643
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9272
- Recall: 0.9382
- F1: 0.9327
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2432 | 1.0 | 878 | 0.0689 | 0.9132 | 0.9203 | 0.9168 | 0.9813 |
| 0.0507 | 2.0 | 1756 | 0.0608 | 0.9208 | 0.9346 | 0.9276 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0611 | 0.9272 | 0.9382 | 0.9327 | 0.9843 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
huggingtweets/vithederg | huggingtweets | 2022-07-26T06:11:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-26T06:09:27Z | ---
language: en
thumbnail: http://www.huggingtweets.com/vithederg/1658815905698/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547564667320487937/0S_fp5iq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">vi✧ (#SaveWingsOfFire)</div>
<div style="text-align: center; font-size: 14px;">@vithederg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from vi✧ (#SaveWingsOfFire).
| Data | vi✧ (#SaveWingsOfFire) |
| --- | --- |
| Tweets downloaded | 3217 |
| Retweets | 2618 |
| Short tweets | 68 |
| Tweets kept | 531 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3lq9tppb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vithederg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2bwbzsrm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2bwbzsrm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vithederg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ArthurZ/jukebox-5b-lyrics | ArthurZ | 2022-07-26T06:02:43Z | 27 | 9 | transformers | [
"transformers",
"pytorch",
"jukebox",
"feature-extraction",
"MusicGeneration",
"en",
"arxiv:2005.00341",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-06-29T17:51:20Z | ---
language:
- en
tags:
- MusicGeneration
---
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Jukebox
## Overview
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf)
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever.
This work proposes a generative music model that can produce minute-long samples conditioned on artist, genre and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
Tips:
This model is currently very slow: it takes about 18 hours to generate one minute of audio.
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/openai/jukebox).
|
NimaBoscarino/unicorn_track_r50_mask | NimaBoscarino | 2022-07-26T05:21:52Z | 0 | 0 | null | [
"object-detection",
"object-tracking",
"video",
"video-object-segmentation",
"arxiv:2111.12085",
"license:mit",
"region:us"
]
| object-detection | 2022-07-26T05:16:06Z | ---
license: mit
tags:
- object-detection
- object-tracking
- video
- video-object-segmentation
inference: false
---
# unicorn_track_r50_mask
## Table of Contents
- [unicorn_track_r50_mask](#-model_id--defaultmymodelname-true)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Evaluation Results](#evaluation-results)
<model_details>
## Model Details
Unicorn accomplishes the great unification of the network architecture and the learning paradigm for four tracking tasks. Unicorn puts forward new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 800x1280.
- License: This model is licensed under the MIT license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2111.12085)
- [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)
</model_details>
<uses>
## Uses
#### Direct Use
This model can be used for:
* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)
<Eval_Results>
## Evaluation Results
LaSOT AUC (%): 65.3
BDD100K mMOTA (%): 35.1
DAVIS17 J&F (%): 66.2
BDD100K MOTS mMOTSA (%): 30.8
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@inproceedings{unicorn,
title={Towards Grand Unification of Object Tracking},
author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
booktitle={ECCV},
year={2022}
}
```
</Cite> |
NimaBoscarino/unicorn_track_tiny_rt_mask | NimaBoscarino | 2022-07-26T05:21:24Z | 0 | 0 | null | [
"object-detection",
"object-tracking",
"video",
"video-object-segmentation",
"arxiv:2111.12085",
"license:mit",
"region:us"
]
| object-detection | 2022-07-19T07:59:43Z | ---
license: mit
tags:
- object-detection
- object-tracking
- video
- video-object-segmentation
inference: false
---
# unicorn_track_tiny_rt_mask
## Table of Contents
- [unicorn_track_tiny_rt_mask](#-model_id--defaultmymodelname-true)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Evaluation Results](#evaluation-results)
<model_details>
## Model Details
Unicorn accomplishes the great unification of the network architecture and the learning paradigm for four tracking tasks. Unicorn puts forward new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 640x1024.
- License: This model is licensed under the MIT license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2111.12085)
- [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)
</model_details>
<uses>
## Uses
#### Direct Use
This model can be used for:
* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)
<Eval_Results>
## Evaluation Results
LaSOT AUC (%): 67.1
BDD100K mMOTA (%): 37.5
DAVIS17 J&F (%): 66.8
BDD100K MOTS mMOTSA (%): 26.2
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@inproceedings{unicorn,
title={Towards Grand Unification of Object Tracking},
author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
booktitle={ECCV},
year={2022}
}
```
</Cite> |
NimaBoscarino/unicorn_track_large_mot_challenge_mask | NimaBoscarino | 2022-07-26T05:20:37Z | 0 | 0 | null | [
"object-detection",
"object-tracking",
"video",
"video-object-segmentation",
"arxiv:2111.12085",
"license:mit",
"region:us"
]
| object-detection | 2022-07-26T05:18:07Z | ---
license: mit
tags:
- object-detection
- object-tracking
- video
- video-object-segmentation
inference: false
---
# unicorn_track_large_mot_challenge_mask
## Table of Contents
- [unicorn_track_large_mot_challenge_mask](#-model_id--defaultmymodelname-true)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Evaluation Results](#evaluation-results)
<model_details>
## Model Details
Unicorn accomplishes the great unification of the network architecture and the learning paradigm for four tracking tasks. Unicorn puts forward new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 800x1280.
- License: This model is licensed under the MIT license
- Resources for more information:
- [Research Paper](https://arxiv.org/abs/2111.12085)
- [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn)
</model_details>
<uses>
## Uses
#### Direct Use
This model can be used for:
* Single Object Tracking (SOT)
* Multiple Object Tracking (MOT)
* Video Object Segmentation (VOS)
* Multi-Object Tracking and Segmentation (MOTS)
This model can simultaneously handle SOT, MOT17, VOS, and the MOTS Challenge.
<Eval_Results>
## Evaluation Results
MOT17 MOTA (%): 77.2
MOTS sMOTSA (%): 65.3
</Eval_Results>
<Cite>
## Citation Information
```bibtex
@inproceedings{unicorn,
title={Towards Grand Unification of Object Tracking},
author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan},
booktitle={ECCV},
year={2022}
}
```
</Cite> |
jaeyeon/korean-aihub-learning-math-1-test | jaeyeon | 2022-07-26T03:41:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T09:41:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: korean-aihub-learning-math-1-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# korean-aihub-learning-math-1-test
This model is a fine-tuned version of [kresnik/wav2vec2-large-xlsr-korean](https://huggingface.co/kresnik/wav2vec2-large-xlsr-korean) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2537
- Wer: 0.4765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 35 | 29.8031 | 1.0 |
| No log | 2.0 | 70 | 5.7158 | 1.0 |
| 19.8789 | 3.0 | 105 | 4.5005 | 1.0 |
| 19.8789 | 4.0 | 140 | 4.3677 | 0.9984 |
| 19.8789 | 5.0 | 175 | 3.8013 | 0.9882 |
| 3.9785 | 6.0 | 210 | 2.4132 | 0.8730 |
| 3.9785 | 7.0 | 245 | 1.5867 | 0.7045 |
| 3.9785 | 8.0 | 280 | 1.3179 | 0.6082 |
| 1.2266 | 9.0 | 315 | 1.2431 | 0.6066 |
| 1.2266 | 10.0 | 350 | 1.1791 | 0.5384 |
| 1.2266 | 11.0 | 385 | 1.0994 | 0.5298 |
| 0.3916 | 12.0 | 420 | 1.1552 | 0.5196 |
| 0.3916 | 13.0 | 455 | 1.1495 | 0.5486 |
| 0.3916 | 14.0 | 490 | 1.1340 | 0.5290 |
| 0.2488 | 15.0 | 525 | 1.2208 | 0.5525 |
| 0.2488 | 16.0 | 560 | 1.1682 | 0.5024 |
| 0.2488 | 17.0 | 595 | 1.1479 | 0.5008 |
| 0.1907 | 18.0 | 630 | 1.1735 | 0.4882 |
| 0.1907 | 19.0 | 665 | 1.2302 | 0.4914 |
| 0.1461 | 20.0 | 700 | 1.2497 | 0.4890 |
| 0.1461 | 21.0 | 735 | 1.2434 | 0.4914 |
| 0.1461 | 22.0 | 770 | 1.2031 | 0.5031 |
| 0.1147 | 23.0 | 805 | 1.2451 | 0.4976 |
| 0.1147 | 24.0 | 840 | 1.2746 | 0.4937 |
| 0.1147 | 25.0 | 875 | 1.2405 | 0.4828 |
| 0.0892 | 26.0 | 910 | 1.2228 | 0.4929 |
| 0.0892 | 27.0 | 945 | 1.2642 | 0.4898 |
| 0.0892 | 28.0 | 980 | 1.2586 | 0.4843 |
| 0.0709 | 29.0 | 1015 | 1.2518 | 0.4788 |
| 0.0709 | 30.0 | 1050 | 1.2537 | 0.4765 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
BigSalmon/GPTNeo350MInformalToFormalLincoln8 | BigSalmon | 2022-07-26T01:39:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-06-12T22:48:09Z | Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln8")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln8")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classical music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
mbartolo/roberta-large-synqa | mbartolo | 2022-07-25T23:36:39Z | 12 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"en",
"dataset:adversarial_qa",
"dataset:mbartolo/synQA",
"dataset:squad",
"arxiv:2002.00293",
"arxiv:2104.08678",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 89.6529
verified: true
- name: F1
type: f1
value: 94.8172
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 55.3333
verified: true
- name: F1
type: f1
value: 66.7464
verified: true
---
# Model Overview
This is a RoBERTa-Large QA Model trained from https://huggingface.co/roberta-large in two stages. First, it is trained on synthetic adversarial data generated using a BART-Large question generator on Wikipedia passages from SQuAD, and then it is trained on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage of fine-tuning.
# Data
Training data: SQuAD + AdversarialQA
Evaluation data: SQuAD + AdversarialQA
# Training Process
Approx. 1 training epoch on the synthetic data and 2 training epochs on the manually-curated data.
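# Usage
A minimal inference sketch (the standard Transformers question-answering pipeline used below is an assumption for illustration, not the authors' evaluation code):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")

result = qa(
    question="What data was the model fine-tuned on?",
    context="The model was trained on synthetic adversarial data and then on SQuAD and AdversarialQA.",
)
print(result["answer"], round(result["score"], 3))
```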
# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. |
huggingtweets/fireship_dev-hacksultan-prathkum | huggingtweets | 2022-07-25T23:02:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-07-25T23:02:23Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1547203581366874113/OW-xVizu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1451624172266868739/lpi5wPb4_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1436819851566219267/HEffZjvP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pratham & Name cannot be blank & Fireship</div>
<div style="text-align: center; font-size: 14px;">@fireship_dev-hacksultan-prathkum</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pratham & Name cannot be blank & Fireship.
| Data | Pratham | Name cannot be blank | Fireship |
| --- | --- | --- | --- |
| Tweets downloaded | 3247 | 3242 | 2081 |
| Retweets | 650 | 598 | 721 |
| Short tweets | 252 | 605 | 114 |
| Tweets kept | 2345 | 2039 | 1246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rmu05er/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fireship_dev-hacksultan-prathkum's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2qzxq4v7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2qzxq4v7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fireship_dev-hacksultan-prathkum')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ultra-coder54732/xlnet-prop-16-train-set | ultra-coder54732 | 2022-07-25T22:48:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-25T20:29:31Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-prop-16-train-set
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-prop-16-train-set
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
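For reference, a sketch of how these settings map onto `TrainingArguments` (the output directory is a placeholder, and the Adam betas/epsilon shown are also the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlnet-prop-16-train-set",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```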
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cpu
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659 | jonatasgrosman | 2022-07-25T22:36:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:35:54Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
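A minimal transcription sketch (using plain Transformers rather than HuggingSound, with `librosa` resampling a placeholder audio file to the 16kHz rate the model expects; this assumes the repository ships the usual Wav2Vec2 processor files):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-8_female-2_s659"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# load the audio and resample it to 16 kHz
speech, _ = librosa.load("audio.wav", sr=16_000)  # "audio.wav" is a placeholder path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```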
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s886 | jonatasgrosman | 2022-07-25T22:26:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:26:39Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s886
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295 | jonatasgrosman | 2022-07-25T22:17:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:16:54Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s825 | jonatasgrosman | 2022-07-25T22:12:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:12:04Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s825
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s577 | jonatasgrosman | 2022-07-25T22:07:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:07:13Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s577
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-10_female-0_s559 | jonatasgrosman | 2022-07-25T22:02:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T22:02:16Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-10_female-0_s559
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
heriosousa/Reinforce-Pong-PLE-v0 | heriosousa | 2022-07-25T21:53:53Z | 0 | 0 | null | [
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-25T21:53:44Z | ---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong-PLE-v0
results:
- metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-0_female-10_s534 | jonatasgrosman | 2022-07-25T21:52:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T21:52:18Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-0_female-10_s534
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s779 | jonatasgrosman | 2022-07-25T21:38:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T21:38:18Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_gender_male-5_female-5_s779
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s543 | jonatasgrosman | 2022-07-25T21:28:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T21:28:16Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-8_belgium-2_s543
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
enoriega/rule_learning_margin_3mm_many_negatives_spanpred_attention | enoriega | 2022-07-25T21:21:23Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
]
| null | 2022-07-23T01:01:07Z | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_3mm_many_negatives_spanpred_attention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_3mm_many_negatives_spanpred_attention
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2196
- Margin Accuracy: 0.8969
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
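The total train batch size reported above follows directly from the per-device batch size and the gradient accumulation steps; a quick sanity check:
```python
train_batch_size = 4
gradient_accumulation_steps = 2000
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 8000  # matches the value listed above
```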
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.3149 | 0.16 | 60 | 0.3098 | 0.8608 |
| 0.2754 | 0.32 | 120 | 0.2725 | 0.8733 |
| 0.2619 | 0.48 | 180 | 0.2512 | 0.8872 |
| 0.2378 | 0.64 | 240 | 0.2391 | 0.8925 |
| 0.2451 | 0.8 | 300 | 0.2305 | 0.8943 |
| 0.2357 | 0.96 | 360 | 0.2292 | 0.8949 |
| 0.2335 | 1.12 | 420 | 0.2269 | 0.8952 |
| 0.2403 | 1.28 | 480 | 0.2213 | 0.8957 |
| 0.2302 | 1.44 | 540 | 0.2227 | 0.8963 |
| 0.2353 | 1.6 | 600 | 0.2222 | 0.8961 |
| 0.2271 | 1.76 | 660 | 0.2207 | 0.8964 |
| 0.228 | 1.92 | 720 | 0.2218 | 0.8967 |
| 0.2231 | 2.08 | 780 | 0.2201 | 0.8967 |
| 0.2128 | 2.24 | 840 | 0.2219 | 0.8967 |
| 0.2186 | 2.4 | 900 | 0.2202 | 0.8967 |
| 0.2245 | 2.56 | 960 | 0.2205 | 0.8969 |
| 0.2158 | 2.72 | 1020 | 0.2196 | 0.8969 |
| 0.2106 | 2.88 | 1080 | 0.2192 | 0.8968 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
AdoubleLen/Reinforce | AdoubleLen | 2022-07-25T21:10:49Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-07-25T20:48:47Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- metrics:
- type: mean_reward
value: 93.20 +/- 24.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587 | jonatasgrosman | 2022-07-25T21:09:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T21:09:10Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s587
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55 | jonatasgrosman | 2022-07-25T21:04:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T21:04:25Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-2_belgium-8_s55
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s673 | jonatasgrosman | 2022-07-25T21:00:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T20:59:56Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s673
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s350 | jonatasgrosman | 2022-07-25T20:50:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T20:50:09Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-10_belgium-0_s350
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s376 | jonatasgrosman | 2022-07-25T20:40:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-07-25T20:40:20Z | ---
language:
- fr
license: apache-2.0
tags:
- automatic-speech-recognition
- fr
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2r_fr_xls-r_accent_france-0_belgium-10_s376
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|