Dataset columns: modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-28 12:28:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 477 classes) | tags (sequence, 1 to 4.05k entries) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-28 12:27:14) | card (string, 11 chars to 1.01M chars)
modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
fadhilarkn/distilbert-base-uncased-finetuned-ner | fadhilarkn | 2022-07-17T09:45:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-17T09:36:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9276948590381426
- name: Recall
type: recall
value: 0.9386956035350711
- name: F1
type: f1
value: 0.9331628113879005
- name: Accuracy
type: accuracy
value: 0.9842883695807584
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0574
- Precision: 0.9277
- Recall: 0.9387
- F1: 0.9332
- Accuracy: 0.9843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
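A minimal sketch of how these values map onto `transformers` `TrainingArguments`; the `output_dir` is illustrative, and the Adam betas/epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir is illustrative
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```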
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2384 | 1.0 | 878 | 0.0701 | 0.9130 | 0.9220 | 0.9175 | 0.9803 |
| 0.0494 | 2.0 | 1756 | 0.0593 | 0.9222 | 0.9314 | 0.9268 | 0.9829 |
| 0.0301 | 3.0 | 2634 | 0.0574 | 0.9277 | 0.9387 | 0.9332 | 0.9843 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
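A minimal usage sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned CoNLL-2003 NER checkpoint and group word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="fadhilarkn/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```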
|
tanfiona/unicausal-tok-baseline | tanfiona | 2022-07-17T07:21:25Z | 10,555 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-17T07:06:08Z | ---
language: en
license: unknown
widget:
- text: "She fell because he pushed her ."
example_title: "Causal Example 1"
- text: "He pushed her , causing her to fall."
example_title: "Causal Example 2"
---
Cause-effect span detection for causal sequences:
```label_to_id = {'B-C': 0, 'B-E': 1, 'I-C': 2, 'I-E': 3, 'O': 4}```
* LABEL_0 = B-C
* LABEL_1 = B-E
* LABEL_2 = I-C
* LABEL_3 = I-E
* LABEL_4 = O
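A usage sketch, assuming the `transformers` token-classification pipeline; the input sentence is taken from the widget examples above:
```python
from transformers import pipeline

# The checkpoint emits generic LABEL_n names; map them back to the BIO cause/effect tags above
tagger = pipeline("token-classification", model="tanfiona/unicausal-tok-baseline")
id_to_label = {"LABEL_0": "B-C", "LABEL_1": "B-E", "LABEL_2": "I-C", "LABEL_3": "I-E", "LABEL_4": "O"}

for token in tagger("She fell because he pushed her ."):
    print(token["word"], id_to_label.get(token["entity"], token["entity"]))
```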
Trained on multiple datasets. |
tanfiona/unicausal-pair-baseline | tanfiona | 2022-07-17T07:17:09Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-17T07:11:17Z | ---
language: en
license: unknown
widget:
- text: "<ARG1>She fell</ARG1> because <ARG0>he pushed her</ARG0> ."
example_title: "Causal Example 1"
- text: "<ARG0>He pushed her</ARG0> , <ARG1>causing her to fall</ARG1>."
example_title: "Causal Example 2"
- text: "<ARG0>She fell</ARG0> because <ARG1>he pushed her</ARG1> ."
example_title: "Non-causal Example 1"
- text: "<ARG1>He is Billy</ARG1> and <ARG0>he pushed her</ARG0>."
example_title: "Non-causal Example 2"
---
Binary causal sentence classification with argument prompts:
* LABEL_0 = Non-causal
* LABEL_1 = Causal (ARG0 causes ARG1)
Trained on multiple datasets.
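A usage sketch, assuming the `transformers` text-classification pipeline; the input is the first widget example above, which the card lists as a causal example (expected LABEL_1):
```python
from transformers import pipeline

# Classify a sentence whose candidate cause/effect spans are marked with <ARG0>/<ARG1> prompts
clf = pipeline("text-classification", model="tanfiona/unicausal-pair-baseline")
print(clf("<ARG1>She fell</ARG1> because <ARG0>he pushed her</ARG0> ."))
```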
For Causal sequences, try swapping the arguments to observe the prediction results. |
micheljperez/testpyramidsrnd2 | micheljperez | 2022-07-17T04:30:25Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2022-07-17T04:30:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: micheljperez/testpyramidsrnd2
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Retrial9842/ppo-LunarLander-v2 | Retrial9842 | 2022-07-17T03:39:15Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-17T03:38:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 218.99 +/- 76.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Minimal loading sketch: replace the filename with the actual .zip checkpoint in this repo
checkpoint = load_from_hub(repo_id="Retrial9842/ppo-LunarLander-v2", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
ebelenwaf/canbert | ebelenwaf | 2022-07-17T03:39:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-04T05:15:34Z | ---
tags:
- generated_from_trainer
model-index:
- name: canbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canbert
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
micheljperez/testpyramidsrnd | micheljperez | 2022-07-17T03:09:43Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2022-07-17T03:09:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: micheljperez/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Konstantine4096/bart-pizza-5K | Konstantine4096 | 2022-07-16T22:26:21Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-16T20:55:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-pizza-5K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-pizza-5K
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0171 | 1.6 | 500 | 0.1688 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aalbertini1990/autotrain-first-test-html-1136241677 | aalbertini1990 | 2022-07-16T21:16:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain",
"en",
"dataset:aalbertini1990/autotrain-data-first-test-html",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-15T12:46:14Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aalbertini1990/autotrain-data-first-test-html
co2_eq_emissions: 19.49742293318862
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1136241677
- CO2 Emissions (in grams): 19.49742293318862
## Validation Metrics
- Loss: 0.18860992789268494
- Rouge1: 84.2283
- Rouge2: 80.2825
- RougeL: 83.9066
- RougeLsum: 83.9129
- Gen Len: 58.3175
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/aalbertini1990/autotrain-first-test-html-1136241677
``` |
Tstarshak/dqn-SpaceInvadersNoFrameskip-v4 | Tstarshak | 2022-07-16T21:07:40Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-16T21:07:00Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 655.50 +/- 310.07
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Tstarshak -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Tstarshak
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
solve/wav2vec2-base-timit-demo-sol | solve | 2022-07-16T19:27:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-02T12:12:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-sol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-sol
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3922
- Wer: 0.2862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6222 | 6.85 | 500 | 1.5843 | 0.9627 |
| 0.509 | 13.7 | 1000 | 0.4149 | 0.3417 |
| 0.1221 | 20.55 | 1500 | 0.3692 | 0.2992 |
| 0.0618 | 27.4 | 2000 | 0.3922 | 0.2862 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.12.1
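A usage sketch, assuming the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder, and the input is expected to be 16 kHz speech:
```python
from transformers import pipeline

# Transcribe a local audio file with the fine-tuned wav2vec2 checkpoint
asr = pipeline("automatic-speech-recognition", model="solve/wav2vec2-base-timit-demo-sol")
result = asr("path/to/audio.wav")  # placeholder path; decoding audio files requires ffmpeg
print(result["text"])
```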
|
hugginglearners/malayalam-ULMFit-Seq2Seq | hugginglearners | 2022-07-16T17:59:50Z | 0 | 1 | fastai | [
"fastai",
"text-translation",
"ml",
"region:us"
] | null | 2022-07-16T11:01:26Z | ---
tags:
- fastai
- text-translation
language: ml
widget:
- text: "കേൾക്കുന്ന എല്ലാ കാര്യങ്ങളും എനിക്കു മനസിലായില്ല"
example_title: "Malayalam Seq2Seq translation"
---
# മലയാളം - English ULMFit translation model (work in progress)
[Kaggle notebook](https://www.kaggle.com/code/rajeshradhakrishnan/ml-ulmfit-seq2seq-translation)
---
# malayalam-ULMFit-Seq2Seq (translation model)
The malayalam-ULMFit-Seq2Seq model is pre-trained as a [fastai](https://docs.fast.ai/text.data.html) language model, following [Malyalam_Language_Model_ULMFiT](https://github.com/goru001/nlp-for-malyalam/blob/master/language-model/Malyalam_Language_Model_ULMFiT.ipynb).
Text is tokenized with SentencePiece using a vocab size of 10000, and the language model is uploaded to this [kaggle dataset](https://www.kaggle.com/datasets/rajeshradhakrishnan/ulmfit-fastai).
## Usage
```
!pip install -Uqq huggingface_hub["fastai"]
from huggingface_hub import from_pretrained_fastai
repo_id = "hugginglearners/malayalam-ULMFit-Seq2Seq"
learner = from_pretrained_fastai(repo_id)
original_xtext = 'കേൾക്കുന്ന എല്ലാ കാര്യങ്ങളും എനിക്കു മനസിലായില്ല'
original_ytext = 'I didnt understand all this'
predicted_text = learner.predict(original_xtext)
print(f'original text: {original_xtext}')
print(f'original answer: {original_ytext}')
print(f'predicted text: {predicted_text}')
```
## Intended uses & limitations
The model is not fine-tuned to state-of-the-art accuracy.
## Training and evaluation data
[Malayalam Samanantar dataset - English-Malayalam pairs uploaded to Kaggle](https://www.kaggle.com/datasets/rajeshradhakrishnan/ulmfit-fastai)
|
Konstantine4096/bart-pizza | Konstantine4096 | 2022-07-16T17:17:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-16T17:07:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-pizza
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-pizza
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
KoichiYasuoka/roberta-base-thai-spm | KoichiYasuoka | 2022-07-16T15:48:22Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"thai",
"masked-lm",
"wikipedia",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
language:
- "th"
tags:
- "thai"
- "masked-lm"
- "wikipedia"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-base-thai-spm
## Model Description
This is a RoBERTa model pre-trained on Thai Wikipedia texts. You can fine-tune `roberta-base-thai-spm` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-thai-spm")
```
|
RobertoFont/pegasus-large-samsum | RobertoFont | 2022-07-16T15:12:09Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-16T11:45:18Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: pegasus-large-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 48.0968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-samsum
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4109
- Rouge1: 48.0968
- Rouge2: 24.6663
- Rougel: 40.2569
- Rougelsum: 44.0137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 230 | 1.4646 | 45.0631 | 22.5567 | 38.0518 | 41.2694 |
| No log | 2.0 | 460 | 1.4203 | 47.4122 | 24.158 | 39.7414 | 43.3485 |
| 1.699 | 3.0 | 690 | 1.4109 | 48.0968 | 24.6663 | 40.2569 | 44.0137 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
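A usage sketch, assuming the `transformers` summarization pipeline; the sample dialogue is illustrative of the SAMSum-style chat input this model was fine-tuned on:
```python
from transformers import pipeline

# Summarize a short chat dialogue with the fine-tuned PEGASUS checkpoint
summarizer = pipeline("summarization", model="RobertoFont/pegasus-large-samsum")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you there!"""
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```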
|
chancar/distilbert-base-uncased-finetuned-ner | chancar | 2022-07-16T14:11:56Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-05-10T15:15:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9780
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.9591 | 1.0 | 878 | 0.9780 | 0.0 | 0.0 | 0.0 | 0.7891 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Tokenizers 0.12.1
|
yuanzf/timemachine | yuanzf | 2022-07-16T12:07:03Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-07-16T11:11:44Z | ---
title: TimeMachine-Visual_Question_Answering
emoji: 🎓
colorFrom: blue
colorTo: pink
sdk: gradio
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
|
roykoand/ppo-LunarLander-v2.1 | roykoand | 2022-07-16T11:26:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-16T11:25:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 274.13 +/- 12.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Minimal loading sketch: replace the filename with the actual .zip checkpoint in this repo
checkpoint = load_from_hub(repo_id="roykoand/ppo-LunarLander-v2.1", filename="<checkpoint>.zip")
model = PPO.load(checkpoint)
```
|
sssingh/distilbert-base-uncased-emotion-finetuned | sssingh | 2022-07-16T08:15:11Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-10T09:17:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: distilbert-base-uncased-emotion-finetuned
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9350215566385567
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-emotion-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1518
- Acc: 0.935
- F1: 0.9350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|
| 0.1734 | 1.0 | 250 | 0.1624 | 0.928 | 0.9279 |
| 0.1187 | 2.0 | 500 | 0.1518 | 0.935 | 0.9350 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
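A usage sketch, assuming the `transformers` text-classification pipeline; the input sentence is illustrative:
```python
from transformers import pipeline

# Classify the emotion of a sentence with the fine-tuned checkpoint
classifier = pipeline("text-classification", model="sssingh/distilbert-base-uncased-emotion-finetuned")
print(classifier("I can't wait to see you this weekend!"))
```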
|
SiraH/dummy-model | SiraH | 2022-07-16T06:28:46Z | 3 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-16T06:20:36Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Someman/xlm-roberta-base-finetuned-panx-de | Someman | 2022-07-16T05:50:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-16T04:51:21Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8640345886904085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1426
- F1: 0.8640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2525 | 1.0 | 787 | 0.1795 | 0.8184 |
| 0.1283 | 2.0 | 1574 | 0.1402 | 0.8468 |
| 0.08 | 3.0 | 2361 | 0.1426 | 0.8640 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Someman/distilbert-base-uncased-finetuned-emotion | Someman | 2022-07-16T05:49:22Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-30T10:53:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245803802599059
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.9245
- F1: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3083 | 0.9005 | 0.8972 |
| No log | 2.0 | 500 | 0.2186 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jhonparra18/roberta-base-cv-studio_name-medium | jhonparra18 | 2022-07-16T02:43:03Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T21:02:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-cv-studio_name-medium
results: []
widget:
- text: "Egresado de la carrera Ingeniería en Computación Conocimientos de lenguajes HTML, CSS, Javascript y MySQL. Experiencia trabajando en ámbitos de redes de pequeña y mediana escala. Inglés Hablado nivel básico, escrito nivel intermedio.HTML, CSS y JavaScript. Realidad aumentada. Lenguaje R. HTML5, JavaScript y Nodejs"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-cv-studio_name-medium
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
Predicts a studio name based on a CV text
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
### Framework versions
- Transformers 4.19.0
- Pytorch 1.8.2+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
pyf98/slurp_entity_branchformer | pyf98 | 2022-07-16T01:45:11Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp_entity",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-05-28T00:40:17Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- slurp_entity
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/slurp_entity_branchformer`
This model was trained by Yifan Peng using the slurp_entity recipe in [espnet](https://github.com/espnet/espnet/).
Branchformer (Peng et al., ICML 2022): [https://proceedings.mlr.press/v162/peng22a.html](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 55b6cc387fd0252d1a06db2042fd101bcea7bb34
pip install -e .
cd egs2/slurp_entity/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/slurp_entity_branchformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri May 27 03:41:59 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.11.0`
- Git hash: `4f36236ed7c8a25c2f869e518614e1ad4a8b50d6`
- Commit date: `Thu May 26 00:22:45 2022 -0400`
## asr_train_asr_branchformer_e18_d6_size512_lr1e-3_warmup35k_raw_en_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/devel|8690|178058|83.7|7.6|8.8|2.8|19.2|50.5|
|decode_asr_asr_model_valid.acc.ave_10best/test|13078|262176|82.6|7.9|9.5|2.7|20.1|49.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave_10best/devel|8690|847400|90.1|3.0|6.9|3.3|13.2|50.5|
|decode_asr_asr_model_valid.acc.ave_10best/test|13078|1245475|89.0|3.2|7.8|3.1|14.1|49.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_branchformer_e18_d6_size512_lr1e-3_warmup35k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_branchformer_e18_d6_size512_lr1e-3_warmup35k_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- kaldi_ark
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- kaldi_ark
- - dump/raw/devel/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 35000
token_list:
- <blank>
- <unk>
- ▁SEP
- ▁FILL
- s
- ▁the
- a
- ▁to
- ▁i
- ▁me
- e
- ▁s
- ▁a
- i
- ▁you
- ▁what
- er
- ing
- u
- ▁is
- ''''
- o
- p
- ▁in
- ▁p
- y
- ▁my
- ▁please
- d
- c
- m
- ▁b
- l
- ▁m
- ▁c
- st
- date
- n
- ▁d
- le
- b
- ▁for
- re
- t
- ▁on
- en
- h
- 'on'
- ar
- person
- ▁re
- ▁f
- ▁g
- ▁of
- an
- ▁
- g
- ▁today
- ▁t
- or
- ▁it
- ▁this
- ▁h
- r
- f
- at
- ch
- ce
- place_name
- ▁email
- ▁do
- es
- ri
- ▁e
- ▁w
- ic
- in
- ▁that
- event_name
- ▁play
- ▁and
- al
- ▁n
- ▁can
- email_query
- ve
- ▁new
- day
- it
- ate
- ▁from
- ▁have
- k
- time
- ▁am
- media_type
- email_sendemail
- ent
- ▁olly
- qa_factoid
- se
- v
- et
- ck
- ▁any
- calendar_set
- ly
- th
- ▁how
- ▁meeting
- ed
- ▁tell
- ▁st
- x
- ur
- ro
- ▁at
- nd
- ▁list
- w
- ▁u
- ou
- ▁not
- ▁about
- ▁an
- ▁o
- general_negate
- ut
- ▁time
- ▁be
- ▁ch
- ▁are
- social_post
- business_name
- la
- ty
- play_music
- ot
- general_quirky
- ▁l
- ▁sh
- ▁tweet
- om
- ▁week
- um
- ▁one
- ter
- ▁he
- ▁up
- ▁com
- general_praise
- weather_query
- ▁next
- ▁th
- ▁check
- calendar_query
- ▁last
- ▁ro
- ad
- is
- ▁with
- ay
- ▁send
- pe
- ▁pm
- ▁tomorrow
- ▁j
- un
- ▁train
- general_explain
- ▁v
- one
- ▁r
- ra
- news_query
- ation
- ▁emails
- us
- if
- ct
- ▁co
- ▁add
- ▁will
- ▁se
- nt
- ▁was
- ine
- ▁de
- ▁set
- ▁ex
- ▁would
- ir
- ow
- ber
- general_repeat
- ight
- ook
- ▁again
- ▁song
- currency_name
- ll
- ▁ha
- ▁go
- relation
- te
- ion
- and
- ▁y
- ▁ye
- general_affirm
- general_confirm
- ery
- ▁po
- ff
- ▁we
- ▁turn
- ▁did
- ▁mar
- ▁alarm
- ▁like
- datetime_query
- ers
- ▁all
- ▁remind
- ▁so
- qa_definition
- ▁calendar
- end
- ▁said
- ci
- ▁off
- ▁john
- ▁day
- ss
- pla
- ume
- ▁get
- ail
- pp
- z
- ry
- am
- ▁need
- as
- ▁thank
- ▁wh
- ▁want
- ▁right
- ▁jo
- ▁facebook
- ▁k
- ge
- ld
- ▁fri
- ▁two
- general_dontcare
- ▁news
- ol
- oo
- ant
- ▁five
- ▁event
- ake
- definition_word
- transport_type
- ▁your
- vi
- orn
- op
- ▁weather
- ome
- ▁app
- ▁lo
- de
- ▁music
- weather_descriptor
- ak
- ke
- ▁there
- ▁si
- ▁lights
- ▁now
- ▁mo
- calendar_remove
- our
- ▁dollar
- food_type
- me
- ▁more
- ▁no
- ▁birthday
- orrect
- ▁rep
- ▁show
- play_radio
- ▁mon
- ▁does
- ood
- ag
- li
- ▁sto
- ▁contact
- cket
- email_querycontact
- ▁ev
- ▁could
- ange
- ▁just
- out
- ame
- .
- ▁ja
- ▁confirm
- qa_currency
- ▁man
- ▁late
- ▁think
- ▁some
- timeofday
- ▁bo
- qa_stock
- ong
- ▁start
- ▁work
- ▁ten
- int
- ▁command
- all
- ▁make
- ▁la
- j
- ▁answ
- ▁hour
- ▁cle
- ah
- ▁find
- ▁service
- ▁fa
- qu
- general_commandstop
- ai
- ▁when
- ▁te
- ▁by
- social_query
- ard
- ▁tw
- ul
- id
- ▁seven
- ▁where
- ▁much
- art
- ▁appointment
- ver
- artist_name
- el
- device_type
- ▁know
- ▁three
- ▁events
- ▁tr
- ▁li
- ork
- red
- ect
- ▁let
- ▁respon
- ▁par
- zz
- ▁give
- ▁twenty
- ▁ti
- ▁curre
- play_podcasts
- ▁radio
- cooking_recipe
- transport_query
- ▁con
- gh
- ▁le
- lists_query
- ▁rem
- recommendation_events
- house_place
- alarm_set
- play_audiobook
- ist
- ase
- music_genre
- ive
- ast
- player_setting
- ort
- lly
- news_topic
- list_name
- ▁playlist
- ▁ne
- business_type
- personal_info
- ind
- ust
- di
- ress
- recommendation_locations
- lists_createoradd
- iot_hue_lightoff
- lists_remove
- ord
- ▁light
- ere
- alarm_query
- audio_volume_mute
- music_query
- ▁audio
- rain
- ▁date
- ▁order
- audio_volume_up
- ▁ar
- ▁podcast
- transport_ticket
- mail
- iot_hue_lightchange
- iot_coffee
- radio_name
- ill
- ▁ri
- '@'
- takeaway_query
- song_name
- takeaway_order
- ▁ra
- email_addcontact
- play_game
- book
- transport_traffic
- ▁house
- music_likeness
- her
- transport_taxi
- iot_hue_lightdim
- ment
- ght
- fo
- order_type
- color_type
- '1'
- ven
- ould
- general_joke
- ess
- ain
- qa_maths
- ▁place
- ▁twe
- cast
- iot_cleaning
- ▁che
- ▁cont
- ith
- audiobook_name
- email_address
- game_name
- ▁cal
- general_frequency
- ▁tom
- ▁food
- act
- iot_hue_lightup
- '2'
- alarm_remove
- podcast_descriptor
- ▁definition
- audio_volume_down
- ▁media
- email_folder
- dia
- meal_type
- ▁mus
- recommendation_movies
- ▁ad
- ree
- pt
- now
- playlist_name
- ▁person
- change_amount
- ▁pla
- escri
- datetime_convert
- podcast_name
- ▁ab
- time_zone
- ▁def
- ting
- iot_wemo_on
- music_settings
- iot_wemo_off
- orre
- cy
- ank
- music_descriptor
- lar
- app_name
- row
- joke_type
- xt
- of
- ition
- ▁meet
- ink
- ▁confir
- transport_agency
- general_greet
- ▁business
- ▁art
- ▁ag
- urn
- escript
- rom
- ▁rel
- ▁au
- ▁currency
- audio_volume_other
- iot_hue_lighton
- ▁artist
- '?'
- ▁bus
- cooking_type
- movie_name
- coffee_type
- ingredient
- ather
- music_dislikeness
- sp
- q
- ▁ser
- esc
- ▁bir
- ▁cur
- name
- ▁tran
- ▁hou
- ek
- uch
- ▁conf
- ▁face
- '9'
- ▁birth
- I
- sw
- transport_descriptor
- ▁comm
- lease
- transport_name
- aid
- movie_type
- ▁device
- alarm_type
- audiobook_author
- '5'
- drink_type
- ▁joh
- ▁defin
- word
- ▁curren
- order
- iness
- W
- cooking_query
- sport_type
- ▁relation
- oint
- H
- '8'
- A
- '0'
- ▁dol
- vice
- ▁pers
- '&'
- T
- ▁appoint
- _
- '7'
- '3'
- '-'
- game_type
- ▁pod
- N
- M
- E
- list
- music_album
- dio
- ▁transport
- qa_query
- C
- O
- U
- query_detail
- ']'
- '['
- descriptor
- ':'
- spon
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: null
preencoder_conf: {}
encoder: branchformer
encoder_conf:
output_size: 512
use_attn: true
attention_heads: 8
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
use_cgmlp: true
cgmlp_linear_units: 2048
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
merge_method: concat
cgmlp_weight: 0.5
attn_branch_drop_rate: 0.0
num_blocks: 18
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
stochastic_depth_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
pyf98/aishell_branchformer_fast_selfattn_e24_amp | pyf98 | 2022-07-16T01:44:33Z | 2 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-05-29T00:00:23Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- aishell
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/aishell_branchformer_fast_selfattn_e24_amp`
This model was trained by Yifan Peng using the aishell recipe in [espnet](https://github.com/espnet/espnet/).
Branchformer (Peng et al., ICML 2022): [https://proceedings.mlr.press/v162/peng22a.html](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout f04c401b4c2de91c1a2cd8f5c0f6505d2711126f
pip install -e .
cd egs2/aishell/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/aishell_branchformer_fast_selfattn_e24_amp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat May 28 16:09:35 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202205`
- pytorch version: `pytorch 1.11.0`
- Git hash: `69141f66a5f0ff3ca370f6abe5738d33819ff9ab`
- Commit date: `Fri May 27 22:12:20 2022 -0400`
## asr_train_asr_branchformer_fast_selfattn_e24_amp_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam10_ctc0.4/dev|14326|14326|66.7|33.3|0.0|0.0|33.3|33.3|
|beam10_ctc0.4/test|7176|7176|64.8|35.2|0.0|0.0|35.2|35.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam10_ctc0.4/dev|14326|205341|95.8|4.1|0.1|0.1|4.3|33.3|
|beam10_ctc0.4/test|7176|104765|95.5|4.4|0.1|0.1|4.6|35.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_branchformer_fast_selfattn_e24_amp.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_branchformer_fast_selfattn_e24_amp_raw_zh_char_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 49507
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 35000
token_list:
- <blank>
- <unk>
- 的
- 一
- 在
- 十
- 中
- 是
- 人
- 有
- 二
- 上
- 了
- 不
- 国
- 市
- 大
- 业
- 为
- 年
- 三
- 发
- 个
- 分
- 出
- 会
- 公
- 行
- 地
- 成
- 这
- 和
- 到
- 五
- 产
- 时
- 对
- 房
- 百
- 能
- 场
- 来
- 以
- 新
- 之
- 日
- 者
- 将
- 现
- 四
- 要
- 家
- 资
- 多
- 月
- 也
- 方
- 后
- 机
- 下
- 前
- 零
- 比
- 于
- 生
- 点
- 开
- 动
- 高
- 经
- 进
- 报
- 体
- 赛
- 子
- 万
- 车
- 用
- 金
- 司
- 可
- 被
- 过
- 手
- 本
- 作
- 自
- 全
- 八
- 六
- 最
- 价
- 目
- 电
- 部
- 交
- 九
- 七
- 面
- 我
- 企
- 加
- 小
- 度
- 实
- 同
- 城
- 工
- 其
- 力
- 定
- 而
- 元
- 合
- 已
- 内
- 与
- 法
- 还
- 关
- 网
- 得
- 他
- 就
- 入
- 名
- 品
- 女
- 记
- 理
- 事
- 长
- 两
- 商
- 都
- 们
- 京
- 并
- 但
- 平
- 制
- 保
- 据
- 期
- 化
- 主
- 重
- 表
- 次
- 相
- 量
- 通
- 道
- 政
- 所
- 天
- 第
- 利
- 间
- 海
- 数
- 务
- 提
- 北
- 展
- 员
- 管
- 投
- 因
- 建
- 好
- 外
- 区
- 更
- 示
- 增
- 从
- 计
- 信
- 性
- 等
- 运
- 项
- 应
- 当
- 收
- 位
- 着
- 起
- 学
- 台
- 民
- 持
- 规
- 设
- 明
- 股
- 正
- 没
- 心
- 然
- 很
- 今
- 调
- 去
- 安
- 此
- 东
- 队
- 如
- 线
- 科
- 世
- 无
- 达
- 身
- 果
- 证
- 基
- 受
- 男
- 需
- 标
- 布
- 情
- 格
- 近
- 步
- 未
- 费
- 求
- 式
- 消
- 千
- 美
- 些
- 里
- 米
- 向
- 看
- 续
- 息
- 意
- 接
- 门
- 回
- 及
- 销
- 老
- 获
- 总
- 监
- 打
- 联
- 至
- 亿
- 说
- 讯
- 住
- 环
- 件
- 整
- 水
- 技
- 路
- 院
- 局
- 特
- 该
- 统
- 由
- 售
- 购
- 强
- 改
- 问
- 乐
- 楼
- 涨
- 处
- 决
- 让
- 系
- 户
- 题
- 推
- 少
- 广
- 显
- 降
- 跑
- 影
- 只
- 选
- 称
- 创
- 易
- 战
- 首
- 完
- 案
- 策
- 常
- 查
- 参
- 种
- 牌
- 程
- 银
- 备
- 认
- 营
- 立
- 势
- 结
- 造
- 超
- 己
- 准
- 存
- 险
- 球
- 各
- 代
- 低
- 再
- 做
- 级
- 款
- 放
- 物
- 告
- 原
- 友
- 转
- 警
- 周
- 界
- 张
- 样
- 传
- 较
- 风
- 单
- 给
- 她
- 州
- 解
- 则
- 视
- 指
- 预
- 升
- 华
- 供
- 走
- 每
- 取
- 导
- 搜
- 集
- 文
- 变
- 客
- 排
- 片
- 头
- 任
- 积
- 术
- 率
- 型
- 军
- 斯
- 研
- 别
- 非
- 直
- 智
- 速
- 组
- 星
- 领
- 口
- 份
- 岁
- 马
- 王
- 快
- 专
- 社
- 使
- 团
- 模
- 器
- 难
- 活
- 拉
- 或
- 约
- 施
- 源
- 构
- 支
- 医
- 儿
- 带
- 服
- 先
- 想
- 引
- 么
- 办
- 照
- 狐
- 权
- 微
- 南
- 始
- 融
- 深
- 士
- 游
- 绩
- 仅
- 况
- 媒
- 随
- 半
- 越
- 幅
- 确
- 注
- 类
- 争
- 税
- 限
- 流
- 均
- 控
- 充
- 额
- 望
- 连
- 划
- 奥
- 亚
- 包
- 娱
- 西
- 财
- 值
- 伤
- 某
- 致
- 终
- 空
- 济
- 众
- 际
- 土
- 买
- 仍
- 育
- 师
- 汽
- 知
- 质
- 态
- 具
- 李
- 责
- 究
- 露
- 条
- 几
- 居
- 共
- 响
- 反
- 站
- 冠
- 节
- 季
- 优
- 委
- 宅
- 观
- 互
- 见
- 范
- 境
- 感
- 负
- 段
- 失
- 采
- 套
- 域
- 尔
- 举
- 何
- 光
- 气
- 落
- 博
- 教
- 锦
- 林
- 山
- 依
- 继
- 极
- 形
- 图
- 审
- 竞
- 益
- 断
- 贷
- 效
- 府
- 复
- 许
- 容
- 健
- 击
- 足
- 又
- 诉
- 助
- 孩
- 色
- 停
- 票
- 双
- 拿
- 板
- 松
- 热
- 那
- 把
- 却
- 清
- 刘
- 议
- 考
- 减
- 曾
- 疑
- 例
- 除
- 功
- 占
- 你
- 试
- 根
- 港
- 太
- 离
- 才
- 货
- 突
- 涉
- 且
- 券
- 配
- 盘
- 即
- 库
- 付
- 破
- 职
- 演
- 农
- 置
- 纪
- 论
- 真
- 龙
- 晚
- 装
- 爱
- 号
- 练
- 死
- 压
- 亲
- 严
- 评
- 田
- 话
- 托
- 护
- 火
- 协
- 红
- 江
- 克
- 卖
- 言
- 租
- 善
- 频
- 普
- 飞
- 验
- 补
- 边
- 满
- 象
- 软
- 算
- 遭
- 馀
- 闻
- 稳
- 厂
- 远
- 苹
- 钱
- 担
- 判
- 官
- 虽
- 湾
- 按
- 昨
- 校
- 必
- 园
- 略
- 救
- 希
- 底
- 执
- 够
- 征
- 拍
- 历
- 像
- 润
- 层
- 债
- 便
- 障
- 围
- 康
- 店
- 往
- 列
- 早
- 测
- 录
- 否
- 香
- 宝
- 阳
- 索
- 核
- 兴
- 检
- 状
- 英
- 村
- 料
- 云
- 留
- 夫
- 移
- 奖
- 病
- 临
- 轻
- 省
- 秒
- 激
- 请
- 革
- 属
- 遇
- 跌
- 维
- 批
- 德
- 承
- 端
- 介
- 精
- 夺
- 群
- 初
- 胜
- 卡
- 尽
- 花
- 辆
- 它
- 故
- 神
- 届
- 治
- 透
- 景
- 白
- 副
- 什
- 宣
- 铁
- 杨
- 跳
- 假
- 登
- 福
- 青
- 药
- 婚
- 养
- 幕
- 违
- 短
- 访
- 修
- 纷
- 律
- 左
- 角
- 酒
- 括
- 爆
- 嫌
- 径
- 宁
- 董
- 适
- 逐
- 刚
- 防
- 陈
- 午
- 差
- 庭
- 独
- 波
- 食
- 识
- 似
- 候
- 黄
- 亡
- 训
- 书
- 退
- 待
- 航
- 块
- 冲
- 扩
- 吴
- 甚
- 申
- 伟
- 眼
- 巴
- 觉
- 找
- 换
- 义
- 轮
- 滑
- 席
- 央
- 送
- 右
- 卫
- 乘
- 石
- 字
- 罪
- 罗
- 泳
- 孙
- 析
- 志
- 另
- 母
- 绿
- 抢
- 止
- 令
- 童
- 妈
- 史
- 刑
- 洲
- 述
- 穿
- 念
- 纳
- 损
- 富
- 免
- 毒
- 络
- 紧
- 妻
- 乎
- 豪
- 素
- 害
- 倒
- 吸
- 街
- 促
- 择
- 杀
- 追
- 巨
- 犯
- 声
- 愿
- 晨
- 思
- 谈
- 河
- 镇
- 尼
- 跟
- 庆
- 链
- 措
- 借
- 赔
- 密
- 圳
- 贴
- 苏
- 温
- 骗
- 习
- 摄
- 版
- 帮
- 币
- 阶
- 阿
- 迎
- 驾
- 黑
- 趋
- 县
- 私
- 吃
- 疗
- 细
- 虑
- 脑
- 韩
- 亮
- 旅
- 抓
- 罚
- 良
- 背
- 脸
- 绝
- 班
- 危
- 础
- 戏
- 戴
- 招
- 命
- 尚
- 缺
- 伙
- 须
- 父
- 夜
- 切
- 操
- 挥
- 派
- 延
- 撞
- 披
- 衣
- 剧
- 陆
- 竟
- 签
- 欧
- 享
- 春
- 徽
- 裁
- 偿
- 启
- 艺
- 宗
- 味
- 察
- 估
- 净
- 募
- 拥
- 释
- 喜
- 顺
- 励
- 靠
- 渐
- 兰
- 油
- 佳
- 困
- 针
- 迷
- 写
- 材
- 硬
- 桥
- 坚
- 订
- 拳
- 累
- 盖
- 室
- 束
- 截
- 距
- 驶
- 旬
- 歌
- 悉
- 烈
- 序
- 患
- 干
- 污
- 圈
- 杰
- 顶
- 败
- 伴
- 归
- 探
- 曝
- 怀
- 急
- 池
- 织
- 秀
- 姐
- 峰
- 顾
- 误
- 键
- 丰
- 玩
- 汉
- 古
- 彩
- 讨
- 朋
- 抗
- 刺
- 挑
- 血
- 凌
- 旧
- 拟
- 晒
- 附
- 惊
- 欢
- 劳
- 丈
- 播
- 徐
- 吗
- 湖
- 笑
- 馆
- 音
- 阵
- 坐
- 谷
- 异
- 怎
- 夏
- 龄
- 熟
- 若
- 惠
- 休
- 永
- 哪
- 暂
- 输
- 绍
- 印
- 冰
- 缓
- 暖
- 听
- 避
- 嘉
- 寻
- 培
- 筹
- 伦
- 雪
- 账
- 暴
- 简
- 予
- 丽
- 泽
- 刻
- 野
- 威
- 宽
- 笔
- 语
- 武
- 炒
- 虚
- 架
- 奇
- 哥
- 尤
- 座
- 迅
- 粉
- 倍
- 朱
- 屋
- 般
- 错
- 津
- 弟
- 汇
- 概
- 鼓
- 掉
- 郑
- 钟
- 召
- 礼
- 禁
- 折
- 缩
- 锁
- 涛
- 乡
- 肥
- 幸
- 雨
- 梦
- 肉
- 攻
- 冬
- 呼
- 蓝
- 综
- 码
- 杯
- 映
- 刀
- 谢
- 编
- 脚
- 晓
- 遍
- 朝
- 吉
- 洗
- 盗
- 丹
- 屏
- 盛
- 秘
- 拘
- 染
- 渠
- 扣
- 洋
- 梯
- 枪
- 久
- 诈
- 川
- 摩
- 俄
- 迪
- 毛
- 赞
- 符
- 画
- 翻
- 妹
- 筑
- 聚
- 哈
- 兵
- 肯
- 胎
- 潮
- 苦
- 逃
- 讲
- 授
- 慢
- 顿
- 遗
- 丝
- 呈
- 揭
- 挂
- 封
- 慧
- 跨
- 询
- 拆
- 森
- 孕
- 脱
- 读
- 枚
- 捐
- 桩
- 跃
- 刷
- 芯
- 斗
- 昆
- 储
- 守
- 触
- 木
- 皮
- 饭
- 添
- 莞
- 震
- 载
- 贵
- 侵
- 撑
- 爸
- 册
- 舞
- 丁
- 贸
- 奶
- 隐
- 妇
- 榜
- 睡
- 陷
- 草
- 扬
- 袭
- 偷
- 督
- 亏
- 吕
- 珠
- 赶
- 扶
- 盈
- 档
- 诺
- 返
- 既
- 末
- 沙
- 谁
- 宏
- 摘
- 典
- 床
- 闭
- 弃
- 雷
- 毕
- 郭
- 玲
- 郎
- 芝
- 胡
- 瑞
- 盟
- 厅
- 抱
- 燃
- 铜
- 旗
- 荣
- 餐
- 牙
- 爷
- 迹
- 宇
- 途
- 潜
- 抵
- 骨
- 援
- 浪
- 玉
- 祖
- 振
- 虹
- 散
- 焦
- 勇
- 努
- 婆
- 拒
- 弹
- 梁
- 坛
- 含
- 坏
- 纯
- 烟
- 冷
- 镜
- 叫
- 赵
- 静
- 仪
- 藏
- 杂
- 痛
- 慎
- 树
- 章
- 塞
- 钢
- 狂
- 呢
- 雅
- 寿
- 恩
- 固
- 狗
- 菜
- 沟
- 献
- 叶
- 泰
- 赢
- 剩
- 窃
- 偏
- 掌
- 宜
- 课
- 趣
- 喝
- 纠
- 籍
- 替
- 炸
- 隔
- 砸
- 搭
- 诚
- 族
- 浙
- 齐
- 杆
- 晋
- 恶
- 奋
- 秋
- 鲜
- 鲁
- 冒
- 赚
- 弱
- 腿
- 祝
- 混
- 缴
- 疾
- 握
- 汪
- 辉
- 奔
- 醒
- 捕
- 骑
- 鸟
- 摆
- 灵
- 敏
- 牛
- 岛
- 恋
- 耗
- 瓦
- 拼
- 恐
- 棒
- 坦
- 厚
- 侧
- 尝
- 薪
- 堂
- 曲
- 答
- 雄
- 徒
- 碍
- 拓
- 翔
- 佛
- 佐
- 滴
- 杭
- 残
- 毫
- 射
- 拖
- 阻
- 辑
- 踪
- 症
- 姓
- 欲
- 鱼
- 船
- 恢
- 衡
- 淡
- 唯
- 乏
- 迟
- 琪
- 烧
- 唐
- 卷
- 陪
- 伏
- 劵
- 繁
- 逆
- 迁
- 诊
- 乱
- 亦
- 谓
- 矿
- 迫
- 忧
- 扮
- 巢
- 扎
- 卓
- 恒
- 庄
- 递
- 灾
- 莱
- 赴
- 煤
- 搏
- 剂
- 梅
- 吧
- 撤
- 哲
- 炳
- 尾
- 誉
- 洛
- 轨
- 署
- 党
- 惯
- 幼
- 缘
- 墨
- 莫
- 辞
- 奏
- 敢
- 垄
- 旁
- 蒙
- 箱
- 吨
- 泛
- 怕
- 闹
- 欠
- 劫
- 纸
- 岸
- 淘
- 赌
- 窗
- 洁
- 岗
- 娘
- 晶
- 劲
- 凭
- 斤
- 洪
- 液
- 槛
- 兼
- 摔
- 楚
- 昌
- 菲
- 萌
- 伍
- 沿
- 咨
- 饮
- 墙
- 沈
- 坡
- 寸
- 溢
- 仓
- 鉴
- 慈
- 柯
- 旦
- 殊
- 坠
- 诸
- 搞
- 伊
- 霸
- 绑
- 氧
- 墅
- 轿
- 蛋
- 忙
- 滨
- 井
- 逼
- 伯
- 癌
- 燕
- 赖
- 浦
- 漏
- 携
- 堪
- 阅
- 诗
- 贩
- 腐
- 倾
- 铺
- 旺
- 横
- 逊
- 允
- 窄
- 鸡
- 唱
- 贿
- 拨
- 砍
- 猛
- 碳
- 堵
- 邀
- 冕
- 栏
- 姆
- 耳
- 绕
- 览
- 聘
- 琳
- 霞
- 挖
- 庞
- 彻
- 颁
- 挺
- 沉
- 抄
- 宫
- 殴
- 垃
- 圾
- 尸
- 涵
- 娃
- 婷
- 牵
- 腾
- 卧
- 偶
- 扰
- 澳
- 迈
- 虎
- 贡
- 词
- 壁
- 宾
- 捷
- 忍
- 佩
- 喊
- 抽
- 植
- 炼
- 奸
- 吐
- 抛
- 祥
- 莉
- 泄
- 械
- 乒
- 辛
- 疯
- 凯
- 扫
- 灯
- 淀
- 毁
- 鬼
- 婴
- 淫
- 冻
- 篮
- 聊
- 帅
- 乔
- 沪
- 羽
- 舍
- 裂
- 忽
- 圆
- 拔
- 朗
- 宿
- 麻
- 眠
- 玮
- 塔
- 碰
- 怪
- 押
- 攀
- 驰
- 欣
- 踏
- 巩
- 废
- 艰
- 乳
- 句
- 侦
- 兄
- 荐
- 寓
- 厦
- 贝
- 纵
- 肖
- 杜
- 忘
- 丢
- 搬
- 曼
- 瓶
- 鹏
- 默
- 惨
- 泡
- 愈
- 敦
- 洞
- 劝
- 颖
- 酷
- 颜
- 巡
- 脏
- 仿
- 羊
- 挤
- 廉
- 麦
- 塌
- 君
- 敌
- 乌
- 俩
- 樊
- 邮
- 烯
- 详
- 舒
- 契
- 漫
- 胞
- 魔
- 宋
- 伐
- 谨
- 姿
- 姑
- 隆
- 纹
- 傅
- 茶
- 著
- 谋
- 敬
- 郁
- 驱
- 菌
- 悬
- 循
- 摊
- 闪
- 伪
- 鸿
- 娜
- 澎
- 湃
- 炉
- 暗
- 闯
- 绪
- 汰
- 稿
- 咬
- 卢
- 泉
- 涌
- 蕾
- 姻
- 熊
- 稀
- 摇
- 吊
- 桌
- 俊
- 哭
- 赠
- 逸
- 吓
- 赫
- 凡
- 俱
- 冯
- 巧
- 涯
- 啦
- 讼
- 恰
- 抚
- 肇
- 锋
- 凶
- 贯
- 悄
- 灭
- 冀
- 糕
- 伸
- 胖
- 腹
- 郊
- 斌
- 鑫
- 厉
- 肩
- 圣
- 浮
- 妙
- 饰
- 尖
- 尊
- 邱
- 诞
- 屡
- 摸
- 酬
- 闲
- 晰
- 匹
- 锻
- 甲
- 敲
- 遥
- 勒
- 兑
- 熙
- 稽
- 蔡
- 惜
- 猫
- 怒
- 驻
- 颇
- 浓
- 宴
- 仁
- 赏
- 磨
- 悲
- 骂
- 轴
- 姜
- 猪
- 割
- 歉
- 玻
- 浩
- 番
- 渡
- 肌
- 践
- 盾
- 甜
- 溺
- 尺
- 忆
- 盐
- 泥
- 薄
- 矛
- 畅
- 抑
- 颗
- 蒋
- 稍
- 碎
- 帝
- 璃
- 掀
- 拐
- 牢
- 幻
- 仔
- 粮
- 艾
- 扭
- 尿
- 刊
- 仑
- 黎
- 埃
- 臂
- 邻
- 苗
- 衔
- 桂
- 潭
- 履
- 贾
- 饼
- 惩
- 诱
- 旋
- 篇
- 辽
- 旭
- 逾
- 豆
- 潘
- 堆
- 甘
- 邦
- 氏
- 拦
- 硕
- 棋
- 裤
- 乓
- 姚
- 厘
- 邓
- 陶
- 萨
- 弗
- 辅
- 廷
- 吁
- 杠
- 绮
- 瑄
- 夹
- 槽
- 祸
- 袁
- 勾
- 赁
- 帖
- 腰
- 漂
- 裕
- 嘴
- 壮
- 弯
- 啊
- 汤
- 垫
- 魏
- 倡
- 栋
- 碑
- 颈
- 暑
- 魅
- 裸
- 疏
- 雇
- 毅
- 忠
- 疆
- 葛
- 凤
- 屈
- 悦
- 馈
- 挡
- 闫
- 氮
- 兆
- 貌
- 厕
- 谣
- 颠
- 猜
- 疲
- 框
- 揽
- 胁
- 憾
- 秩
- 艳
- 帽
- 氛
- 荷
- 泪
- 剑
- 懂
- 钻
- 遵
- 贪
- 贼
- 狱
- 姣
- 寺
- 胶
- 吵
- 催
- 削
- 丑
- 欺
- 肃
- 妥
- 烦
- 灰
- 擅
- 佣
- 萧
- 虾
- 鞋
- 捧
- 逝
- 猥
- 瓜
- 酸
- 奈
- 厨
- 紫
- 侠
- 塑
- 娇
- 辖
- 舆
- 擦
- 柏
- 澄
- 磊
- 虐
- 轰
- 曹
- 删
- 鼻
- 柳
- 屯
- 笼
- 皇
- 糖
- 珍
- 疼
- 柜
- 捡
- 址
- 肠
- 捞
- 拜
- 峻
- 吹
- 乃
- 瘦
- 肚
- 贤
- 帕
- 岳
- 勤
- 瑜
- 锅
- 沫
- 俗
- 昕
- 帆
- 茂
- 醉
- 填
- 饱
- 爬
- 轩
- 滞
- 蜜
- 汗
- 飙
- 耐
- 亨
- 媳
- 彭
- 蓄
- 蝶
- 炮
- 鼠
- 咖
- 琴
- 宠
- 棍
- 掘
- 茨
- 坑
- 湘
- 孟
- 劣
- 灿
- 虫
- 彦
- 喷
- 描
- 辩
- 尴
- 尬
- 弥
- 孤
- 峡
- 凸
- 逻
- 辰
- 孔
- 抬
- 馨
- 蔚
- 怡
- 雯
- 砖
- 崇
- 肢
- 柱
- 阔
- 彼
- 荒
- 滚
- 葡
- 萄
- 昂
- 盆
- 怨
- 瞬
- 斜
- 斩
- 睛
- 剪
- 插
- 棚
- 串
- 沃
- 柔
- 肤
- 壳
- 胸
- 陕
- 凉
- 崛
- 鸣
- 罕
- 衷
- 阴
- 盲
- 伞
- 戒
- 踢
- 狼
- 埋
- 酿
- 旨
- 戈
- 捉
- 跪
- 贺
- 谭
- 涂
- 萎
- 滋
- 昏
- 扇
- 鼎
- 楠
- 驳
- 溪
- 桑
- 钧
- 荡
- 痕
- 玛
- 躲
- 谐
- 您
- 叹
- 桶
- 晕
- 丙
- 璇
- 咚
- 烂
- 杉
- 挣
- 窝
- 亵
- 芸
- 渝
- 芳
- 妆
- 膜
- 煌
- 尘
- 侯
- 赋
- 渣
- 贫
- 桃
- 页
- 吞
- 胀
- 竹
- 肝
- 雾
- 嫁
- 辈
- 愤
- 琐
- 殖
- 媛
- 寄
- 僵
- 逮
- 聪
- 粗
- 寒
- 弄
- 墓
- 谌
- 扔
- 役
- 呆
- 靖
- 蒂
- 芬
- 翼
- 喂
- 孵
- 谎
- 硅
- 璨
- 喀
- 盼
- 盒
- 慌
- 烫
- 秦
- 梳
- 韦
- 袋
- 钓
- 夕
- 碗
- 寨
- 塘
- 衍
- 垒
- 卿
- 滩
- 扑
- 绘
- 辱
- 炎
- 铅
- 肿
- 衰
- 厢
- 躺
- 纽
- 硫
- 睐
- 翁
- 慰
- 耍
- 缠
- 狠
- 脉
- 斥
- 脂
- 趴
- 钩
- 歧
- 椅
- 踩
- 掷
- 挽
- 锐
- 勘
- 逢
- 郝
- 宪
- 胃
- 粒
- 瞩
- 辟
- 皆
- 仰
- 腕
- 匪
- 陵
- 钥
- 缝
- 闸
- 犬
- 锡
- 弊
- 凝
- 臭
- 趁
- 拾
- 夸
- 掩
- 耀
- 炭
- 铬
- 叠
- 坊
- 挪
- 蟹
- 裹
- 狮
- 辐
- 陌
- 捅
- 疫
- 兹
- 霍
- 锈
- 娟
- 蚁
- 奢
- 吻
- 侃
- 晖
- 扳
- 冤
- 彰
- 蹈
- 畴
- 蛇
- 濠
- 啡
- 堡
- 侣
- 撒
- 铭
- 掏
- 奎
- 蜂
- 咸
- 穷
- 瞄
- 遂
- 碾
- 匿
- 瓷
- 舱
- 刹
- 柄
- 倪
- 睹
- 译
- 淇
- 猝
- 浅
- 肺
- 湿
- 顽
- 罩
- 胆
- 匙
- 渴
- 妮
- 羞
- 脆
- 魄
- 锂
- 纤
- 炫
- 裙
- 肾
- 傲
- 膝
- 叔
- 啥
- 撕
- 牲
- 猴
- 辨
- 酝
- 刮
- 惑
- 渗
- 喻
- 晴
- 淑
- 羡
- 慕
- 擂
- 骚
- 纺
- 咕
- 僧
- 悔
- 垂
- 瘫
- 剥
- 舰
- 浏
- 鲍
- 跻
- 亭
- 撰
- 卸
- 莲
- 纱
- 糊
- 朵
- 岩
- 眉
- 函
- 糟
- 仗
- 惹
- 琦
- 贞
- 氢
- 楷
- 莓
- 瞒
- 奠
- 勃
- 锤
- 妨
- 帷
- 洽
- 乞
- 牺
- 亩
- 簿
- 斑
- 翘
- 祈
- 唇
- 耕
- 扯
- 妍
- 坎
- 谱
- 盯
- 泼
- 悍
- 莎
- 汁
- 囊
- 甩
- 辣
- 浸
- 恼
- 盔
- 烤
- 坝
- 巅
- 沸
- 抹
- 邹
- 霾
- 怖
- 犹
- 擎
- 迄
- 恨
- 丧
- 坞
- 袖
- 赤
- 萍
- 爽
- 穆
- 娶
- 闷
- 捍
- 膀
- 侈
- 筋
- 逛
- 倩
- 纲
- 遮
- 御
- 姨
- 淮
- 宰
- 叉
- 绵
- 惧
- 钦
- 廊
- 鳄
- 砂
- 浆
- 禽
- 咏
- 瘾
- 饿
- 痴
- 绳
- 碟
- 韵
- 皓
- 廖
- 岭
- 蛙
- 兔
- 芽
- 剖
- 嫖
- 昔
- 哀
- 蔓
- 谦
- 滥
- 赂
- 渊
- 捣
- 佑
- 弈
- 仙
- 澡
- 骤
- 侨
- 奉
- 磅
- 慨
- 筛
- 嘲
- 竣
- 箭
- 荧
- 脖
- 彤
- 豫
- 躁
- 秉
- 鹤
- 幺
- 渔
- 罢
- 贬
- 铲
- 卵
- 逗
- 牧
- 蔬
- 苑
- 沦
- 遏
- 柴
- 庙
- 兽
- 耶
- 魂
- 溜
- 缉
- 俏
- 蕴
- 苛
- 凑
- 婿
- 铸
- 兜
- 蹭
- 鸭
- 朴
- 肋
- 噪
- 焚
- 坍
- 啤
- 钉
- 戚
- 谍
- 挫
- 艇
- 余
- 巷
- 屠
- 咋
- 詹
- 衫
- 浴
- 爹
- 孝
- 瘤
- 霖
- 崩
- 甸
- 悼
- 擒
- 浇
- 雕
- 竖
- 帐
- 萤
- 靡
- 漠
- 傻
- 撼
- 崔
- 筒
- 脊
- 嘛
- 臣
- 禾
- 龟
- 唤
- 呀
- 壤
- 灌
- 邵
- 稻
- 巾
- 葩
- 饥
- 缔
- 舌
- 窜
- 秽
- 茅
- 靓
- 阱
- 钞
- 潼
- 硝
- 墩
- 蝙
- 蝠
- 嫂
- 艘
- 嚣
- 铃
- 扒
- 佬
- 竭
- 赎
- 傍
- 熬
- 悠
- 挨
- 泊
- 攒
- 坪
- 焰
- 螺
- 薇
- 蛛
- 牟
- 忌
- 愧
- 酵
- 迭
- 饶
- 惟
- 钮
- 闵
- 碧
- 徘
- 徊
- 溯
- 棉
- 歪
- 捂
- 蚊
- 锰
- 屁
- 畸
- 肪
- 蹲
- 剔
- 榆
- 撇
- 瑟
- 讶
- 飘
- 蒸
- 诠
- 寂
- 罄
- 莹
- 鹅
- 泣
- 崖
- 珊
- 讳
- 翰
- 蜘
- 仲
- 燥
- 菱
- 滢
- 煎
- 蛮
- 瞻
- 蘑
- 菇
- 隙
- 捆
- 蕉
- 遣
- 宛
- 肆
- 丸
- 磁
- 玥
- 嵌
- 韶
- 枝
- 咪
- 愉
- 呕
- 淤
- 誓
- 辄
- 俯
- 桐
- 舅
- 蓉
- 渭
- 氯
- 溅
- 雁
- 龚
- 恺
- 妖
- 饽
- 荆
- 枯
- 仇
- 坟
- 澜
- 麟
- 藤
- 猎
- 洒
- 茹
- 碌
- 畏
- 涤
- 俞
- 勿
- 蔽
- 罐
- 尹
- 堰
- 儒
- 芮
- 孚
- 哗
- 掐
- 矶
- 椎
- 阐
- 驴
- 蝉
- 焕
- 鄂
- 耻
- 炯
- 衬
- 婉
- 愁
- 梨
- 丛
- 谅
- 膨
- 曙
- 鹿
- 骄
- 缅
- 匆
- 赃
- 蒲
- 睁
- 焱
- 灼
- 刃
- 螃
- 瑕
- 讹
- 禅
- 臀
- 姗
- 媚
- 呛
- 凰
- 瀚
- 埔
- 弓
- 阚
- 湛
- 奕
- 扛
- 齿
- 挟
- 髓
- 狭
- 栈
- 骏
- 崭
- 慑
- 殿
- 祭
- 僻
- 蹬
- 寡
- 呦
- 鞠
- 酱
- 瑰
- 馒
- 坤
- 趟
- 臻
- 咒
- 豹
- 畜
- 冉
- 绎
- 岌
- 甄
- 绞
- 宵
- 庸
- 歇
- 挠
- 氨
- 乙
- 茵
- 岔
- 淄
- 碘
- 淋
- 蓬
- 颅
- 羹
- 浑
- 昧
- 翠
- 峥
- 惕
- 睿
- 芦
- 蚀
- 颓
- 霜
- 钰
- 橘
- 堤
- 凳
- 溶
- 锯
- 幂
- 榴
- 娼
- 汹
- 茫
- 厌
- 绰
- 崎
- 溃
- 撬
- 沾
- 拇
- 疵
- 哦
- 弧
- 弘
- 咽
- 葬
- 阁
- 竿
- 篡
- 隶
- 诟
- 煮
- 丘
- 耿
- 彬
- 敞
- 泻
- 夷
- 隅
- 渎
- 淹
- 骆
- 醋
- 霆
- 涩
- 陀
- 叙
- 梗
- 冶
- 敛
- 痪
- 讽
- 疤
- 螂
- 芒
- 幢
- 炜
- 毯
- 橙
- 拢
- 俨
- 仕
- 氰
- 钾
- 呐
- 株
- 脾
- 烨
- 磕
- 薛
- 窖
- 芷
- 蜕
- 衅
- 歹
- 哒
- 诡
- 摧
- 漆
- 蟑
- 劈
- 呵
- 絮
- 抖
- 娅
- 铝
- 霉
- 芭
- 辜
- 昊
- 嘘
- 哑
- 枢
- 脐
- 庐
- 钠
- 鳌
- 矩
- 锆
- 婧
- 沛
- 饲
- 熄
- 翡
- 屹
- 膏
- 阙
- 搂
- 锣
- 幌
- 橄
- 榄
- 杖
- 旷
- 矫
- 冈
- 舟
- 腊
- 聂
- 拣
- 遛
- 勋
- 窘
- 韧
- 咱
- 拎
- 椒
- 揣
- 殷
- 揪
- 伽
- 贱
- 琼
- 菡
- 闺
- 昭
- 雏
- 蹊
- 黛
- 禹
- 鞍
- 乖
- 汝
- 甫
- 彝
- 泸
- 诬
- 拽
- 毽
- 搅
- 葵
- 旱
- 勉
- 跷
- 畔
- 肘
- 坂
- 漩
- 涡
- 倘
- 醛
- 曦
- 铀
- 杏
- 棕
- 幽
- 裴
- 阮
- 敷
- 茄
- 沧
- 剽
- 恳
- 淳
- 萱
- 袱
- 亥
- 痱
- 腔
- 嫉
- 粹
- 焊
- 诀
- 粪
- 朔
- 黯
- 谜
- 眨
- 祁
- 暧
- 魁
- 辗
- 穗
- 倦
- 剿
- 袍
- 恭
- 炙
- 娴
- 玫
- 锏
- 熏
- 窥
- 堕
- 悟
- 晃
- 缪
- 驿
- 泷
- 雀
- 惫
- 玺
- 剃
- 斐
- 袂
- 梭
- 哄
- 邪
- 岂
- 腻
- 嫩
- 榕
- 谴
- 潇
- 纬
- 侮
- 翅
- 镶
- 坷
- 彪
- 祷
- 匝
- 耽
- 萝
- 窑
- 瑾
- 滤
- 拱
- 哨
- 蠢
- 邢
- 涞
- 恤
- 泾
- 谤
- 瀑
- 舶
- 懈
- 忱
- 烹
- 晟
- 踞
- 剁
- 珉
- 庚
- 晤
- 壶
- 砾
- 嗅
- 妒
- 匈
- 胰
- 绯
- 荼
- 爪
- 茜
- 桦
- 蜇
- 芜
- 玄
- 葫
- 蚂
- 绊
- 搁
- 霏
- 粘
- 佟
- 雍
- 垮
- 羁
- 娥
- 碱
- 磷
- 钊
- 毙
- 诿
- 绸
- 捏
- 遴
- 畊
- 厮
- 巫
- 猖
- 獗
- 掴
- 辍
- 蜡
- 赣
- 筵
- 芙
- 蒜
- 缆
- 俪
- 鹰
- 笋
- 毋
- 喆
- 鹭
- 蝴
- 汀
- 诽
- 桔
- 篷
- 莽
- 栖
- 饪
- 伺
- 戳
- 谊
- 霄
- 侄
- 滔
- 瞎
- 皱
- 蛟
- 裔
- 烽
- 猿
- 叮
- 绷
- 腺
- 暨
- 沥
- 喧
- 囤
- 掠
- 陡
- 膺
- 痒
- 饵
- 戎
- 褚
- 丐
- 渤
- 帜
- 娄
- 洼
- 禄
- 婵
- 琢
- 躯
- 禺
- 峙
- 踹
- 怜
- 炖
- 剐
- 缚
- 襄
- 枫
- 绽
- 庾
- 斧
- 穴
- 寇
- 蝇
- 鞭
- 阎
- 矢
- 糙
- 巍
- 蒿
- 殒
- 蛰
- 囧
- 卜
- 宙
- 珮
- 鸦
- 璞
- 翟
- 酗
- 褒
- 豁
- 镑
- 耷
- 棠
- 垦
- 韬
- 荫
- 窨
- 鸽
- 羲
- 懒
- 躬
- 匕
- 犀
- 吼
- 珀
- 昙
- 樱
- 蹿
- 抉
- 苍
- 汛
- 铉
- 镉
- 喔
- 邯
- 郸
- 噱
- 瓯
- 沼
- 捻
- 苯
- 蹼
- 麋
- 阀
- 煞
- 踝
- 缭
- 菊
- 竺
- 峭
- 攥
- 癖
- 肛
- 泔
- 拯
- 窟
- 靳
- 舵
- 嘱
- 昱
- 勺
- 吾
- 丫
- 觅
- 醇
- 磋
- 徙
- 陨
- 惺
- 渍
- 炬
- 栽
- 晏
- 颂
- 奴
- 榔
- 驭
- 嚼
- 赡
- 豚
- 蔷
- 梓
- 梧
- 哽
- 晗
- 汞
- 嫣
- 蕊
- 祺
- 疹
- 壹
- 噬
- 皂
- 矗
- 悚
- 憧
- 憬
- 拷
- 扁
- 廓
- 蹴
- 岚
- 瑛
- 崴
- 栗
- 囚
- 涿
- 礁
- 晔
- 殡
- 璀
- 淞
- 隋
- 踵
- 钵
- 煊
- 赘
- 瞧
- 寞
- 陋
- 骷
- 髅
- 秸
- 秆
- 夯
- 荔
- 襁
- 褓
- 笨
- 沮
- 瞅
- 怂
- 茗
- 甥
- 亟
- 杳
- 煦
- 挚
- 棵
- 祠
- 嗯
- 枕
- 粟
- 泌
- 蜀
- 寥
- 遐
- 涝
- 辫
- 籁
- 窍
- 聋
- 逍
- 跤
- 凹
- 釜
- 嘀
- 嗒
- 淝
- 藜
- 翱
- 硚
- 叼
- 痹
- 腼
- 腆
- 伎
- 骋
- 愕
- 腥
- 拮
- 轧
- 癫
- 橡
- 膊
- 觑
- 寅
- 砒
- 趾
- 颐
- 漳
- 峨
- 呜
- 淆
- 凿
- 壕
- 铨
- 莆
- 筷
- 璧
- 譬
- 岖
- 抠
- 笛
- 厥
- 砺
- 喉
- 酌
- 簧
- 鲸
- 踊
- 牡
- 嬛
- 缜
- 奂
- 熹
- 闽
- 馊
- 胯
- 喇
- 伶
- 墟
- 煜
- 耘
- 榷
- 骁
- 猩
- 辙
- 狸
- 滕
- 诵
- 窒
- 恍
- 髦
- 诫
- 榨
- 熠
- 蔺
- 薯
- 歆
- 粤
- 夭
- 拌
- 唏
- 厄
- 吝
- 眷
- 峪
- 拙
- 咎
- 粥
- 痰
- 琅
- 羚
- 莘
- 憨
- 瞰
- 炅
- 孜
- 亢
- 缮
- 焯
- 咄
- 暇
- 矮
- 汲
- 灶
- 闰
- 奚
- 汶
- 珲
- 麓
- 憋
- 崂
- 镳
- 殃
- 卉
- 诧
- 矣
- 屎
- 聆
- 芋
- 屑
- 罂
- 籽
- 绚
- 卞
- 枉
- 汕
- 懋
- 媲
- 啧
- 掣
- 嬉
- 仨
- 姬
- 懿
- 馅
- 胺
- 撂
- 睫
- 蛐
- 萃
- 眈
- 飚
- 毓
- 涅
- 昼
- 橱
- 驼
- 涠
- 谩
- 婶
- 膛
- 拄
- 绣
- 栅
- 邬
- 怠
- 鄙
- 哉
- 跺
- 帘
- 沓
- 搀
- 腌
- 羿
- 泵
- 鄞
- 郡
- 烃
- 愚
- 蕙
- 垤
- 锌
- 柠
- 檬
- 葱
- 垢
- 匮
- 卦
- 懊
- 掺
- 叱
- 坯
- 糯
- 覆
- 铆
- 琬
- 抡
- 潢
- 棺
- 塾
- 飓
- 诅
- 翩
- 揍
- 檀
- 鳝
- 讪
- 熔
- 杞
- 啃
- 昀
- 紊
- 敖
- 璐
- 蔗
- 槌
- 铐
- 搡
- 磐
- 宕
- 栓
- 叭
- 戟
- 顷
- 濒
- 窦
- 摁
- 俐
- 瞳
- 蚕
- 鹊
- 迂
- 畿
- 瓣
- 媞
- 寝
- 蹦
- 嗑
- 袒
- 殉
- 稚
- 俘
- 搪
- 沽
- 妃
- 嗓
- 胫
- 町
- 莴
- 苣
- 痘
- 蔑
- 皖
- 枞
- 忐
- 忑
- 靴
- 菁
- 姥
- 诙
- 嚷
- 焉
- 沣
- 霹
- 雳
- 僚
- 尧
- 嘎
- 诩
- 咫
- 柬
- 惮
- 狄
- 匀
- 裆
- 黏
- 釉
- 膳
- 渺
- 苟
- 瑶
- 唾
- 瘠
- 讧
- 睦
- 弦
- 庇
- 袄
- 噩
- 扼
- 戛
- 禀
- 恿
- 滁
- 麾
- 筱
- 瘀
- 褪
- 槟
- 缨
- 绒
- 犷
- 茸
- 惋
- 嗤
- 寮
- 褂
- 咳
- 缀
- 谙
- 涧
- 炽
- 缄
- 鹜
- 砌
- 贮
- 庵
- 隧
- 卤
- 跆
- 皋
- 蝗
- 洱
- 圪
- 邑
- 锄
- 荟
- 渚
- 苇
- 孰
- 鹃
- 哼
- 呃
- 琛
- 痣
- 摹
- 痼
- 镯
- 刁
- 秧
- 腩
- 鳞
- 乍
- 颚
- 慷
- 氓
- 惦
- 卑
- 挝
- 熨
- 濮
- 胳
- 瓢
- 砰
- 溧
- 锷
- 鸠
- 犒
- 姝
- 蹄
- 宸
- 侥
- 锭
- 佶
- 浊
- 婪
- 磺
- 咤
- 迢
- 檐
- 邺
- 掂
- 渲
- 嚎
- 祛
- 伢
- 叛
- 撮
- 甬
- 淌
- 瀛
- 朽
- 陂
- 帼
- 铿
- 锵
- 漓
- 驯
- 鲨
- 抒
- 茁
- 柿
- 貔
- 貅
- 钝
- 鳅
- 嚏
- 暮
- 瑚
- 荤
- 蜓
- 垣
- 颤
- 溥
- 臃
- 戮
- 枣
- 佼
- 拗
- 哆
- 嗦
- 惚
- 鸥
- 倚
- 嗨
- 舸
- 赐
- 姊
- 憔
- 悴
- 铰
- 黝
- 屿
- 秃
- 嘻
- 楞
- 棱
- 袈
- 裟
- 汴
- 揉
- 髋
- 悸
- 榻
- 逞
- 晾
- 屌
- 闳
- 痊
- 袜
- 扉
- 琶
- 摒
- 捺
- 匠
- 窈
- 窕
- 飒
- 猬
- 蜚
- 萋
- 蚯
- 蚓
- 鲟
- 澈
- 樟
- 悖
- 玖
- 俾
- 抿
- 彷
- 彿
- 虱
- 狙
- 鲶
- 槿
- 烘
- 挎
- 狰
- 狞
- 邃
- 瞪
- 俚
- 涕
- 谬
- 睬
- 蜷
- 兢
- 镍
- 砷
- 菠
- 怦
- 凄
- 卯
- 獒
- 渀
- 辘
- 滇
- 燎
- 噎
- 蝎
- 綦
- 鄢
- 捎
- 瞿
- 蜿
- 蜒
- 禧
- 榈
- 锹
- 殭
- 爵
- 盹
- 淖
- 啼
- 瓮
- 鳖
- 镖
- 珑
- 罹
- 殆
- 掖
- 柞
- 缸
- 绅
- 棘
- 祉
- 胱
- 殓
- 嗡
- 嗷
- 箍
- 圩
- 耒
- 婕
- 腑
- 萦
- 鹞
- 珜
- 啵
- 瑙
- 葆
- 逡
- 嗽
- 饕
- 餮
- 隼
- 妞
- 饺
- 叨
- 酋
- 恙
- 泗
- 弩
- 骜
- 铎
- 酶
- 蚝
- 烁
- 匾
- 侬
- 藻
- 馥
- 骥
- 槐
- 缕
- 椿
- 袆
- 琊
- 稣
- 藩
- 迸
- 蹂
- 躏
- 隽
- 俸
- 郫
- 簸
- 砥
- 骸
- 掮
- 斛
- 啸
- 璋
- 垛
- 札
- 邋
- 遢
- 蕲
- 哇
- 碴
- 邛
- 崃
- 觐
- 笙
- 裳
- 泞
- 蚌
- 醍
- 醐
- 拴
- 舜
- 沅
- 懵
- 谕
- 帚
- 螳
- 噼
- 啪
- 漱
- 郜
- 碉
- 圭
- 谀
- 轶
- 舀
- 呲
- 啶
- 氟
- 琏
- 垅
- 娩
- 乾
- 鏖
- 牾
- 肮
- 啕
- 吏
- 涓
- 氦
- 锥
- 桎
- 吿
- 烊
- 斟
- 汾
- 岐
- 耄
- 耋
- 嗲
- 胛
- 疚
- 骇
- 癣
- 磡
- 侑
- 漾
- 碚
- 琉
- 惬
- 遁
- 耸
- 岱
- 糗
- 缙
- 肴
- 梵
- 僮
- 鸵
- 悯
- 孪
- 莅
- 戬
- 霁
- 簇
- 逵
- 倜
- 傥
- 馋
- 蓁
- 衙
- 蛀
- 蔫
- 崧
- 吟
- 琰
- 唬
- 渥
- 岷
- 仡
- 涎
- 鸳
- 鸯
- 镊
- 妧
- 嬷
- 嫦
- 嫔
- 沐
- 伉
- 嶝
- 锢
- 筐
- 蜥
- 蜴
- 泱
- 骅
- 吆
- 撩
- 怯
- 叩
- 哟
- 啬
- 岬
- 笃
- 玳
- 瑁
- 邝
- 咣
- 矜
- 嘭
- 馗
- 婀
- 黔
- 锟
- 啰
- 翌
- 铠
- 貉
- 獾
- 酣
- 楣
- 佃
- 琵
- 茆
- 皙
- 凋
- 敝
- 匣
- 嵘
- 宓
- 茎
- 楂
- 竲
- 瘪
- 侗
- 铣
- 薰
- 砲
- 羣
- 淼
- 襟
- 妊
- 娠
- 罡
- 瘁
- 椰
- 烙
- 呗
- 荃
- 皎
- 殚
- 腋
- 骼
- 腓
- 榭
- 隘
- 唉
- 铮
- 狩
- 抨
- 峁
- 粱
- 阂
- 厩
- 莠
- 吩
- 咐
- 瞌
- 蜊
- 恬
- 膑
- 踉
- 跄
- 颍
- 朐
- 疝
- 毂
- 秣
- 舛
- 炊
- 漯
- 泠
- 喘
- 撵
- 狡
- 猾
- 铂
- 钛
- 荞
- 拭
- 丞
- 漭
- 绌
- 埜
- 掰
- 狈
- 锜
- 菩
- 弛
- 寰
- 秤
- 灞
- 黍
- 蓟
- 嵛
- 榉
- 幄
- 颊
- 缤
- 朦
- 胧
- 冥
- 砝
- 镀
- 夙
- 燊
- 荚
- 浈
- 苡
- 眺
- 陬
- 寐
- 佘
- 濑
- 仄
- 楔
- 胚
- 嵩
- 洙
- 诓
- 阜
- 浚
- 觊
- 觎
- 曰
- 怵
- 兖
- 稠
- 嵋
- 艋
- 篪
- 琥
- 玟
- 褴
- 褛
- 喱
- 虞
- 魇
- 凇
- 徉
- 嘟
- 臆
- 犊
- 哎
- 靑
- 俺
- 塬
- 妯
- 娌
- 蜈
- 蚣
- 恣
- 沏
- 磴
- 霎
- 趸
- 麒
- 氪
- 缇
- 沁
- 疃
- 恸
- 瘩
- 暄
- 憩
- 祯
- 惰
- 溉
- 沱
- 诲
- 笈
- 擘
- 亳
- 孺
- 忪
- 瞟
- 擞
- 瘸
- 掬
- 唁
- 蹚
- 匡
- 粕
- 鲷
- 泓
- 叵
- 嗣
- 眯
- 炷
- 珺
- 漕
- 谑
- 咯
- 嗬
- 缰
- 卲
- 壑
- 靶
- 隍
- 唠
- 濡
- 盎
- 骊
- 腱
- 鞘
- 拧
- 痫
- 宦
- 诶
- 椋
- 鼾
- 湍
- 毗
- 酪
- 赦
- 炕
- 焘
- 奘
- 邂
- 逅
- 妄
- 骐
- 卒
- 喵
- 觥
- 眬
- 纣
- 憷
- 覃
- 孀
- 芊
- 孢
- 惶
- 迥
- 纰
- 咀
- 鸾
- 箫
- 晦
- 泯
- 砚
- 吭
- 祢
- 揩
- 刨
- 珏
- 撸
- 兀
- 痉
- 挛
- 胤
- 巿
- 纶
- 镁
- 哺
- 咔
- 嚓
- 稼
- 焖
- 妤
- 妩
- 潞
- 雌
- 栾
- 侍
- 煲
- 嫚
- 竽
- 恪
- 霈
- 赝
- 莺
- 眶
- 桓
- 槎
- 馑
- 涮
- 枭
- 徇
- 洵
- 垌
- 昵
- 褶
- 喽
- 脯
- 孱
- 遨
- 谚
- 烷
- 搽
- 酯
- 枷
- 桉
- 咧
- 窿
- 拈
- 斓
- 跛
- 蹶
- 瘟
- 俭
- 靛
- 脍
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: branchformer
encoder_conf:
output_size: 256
use_attn: true
attention_heads: 4
attention_layer_type: fast_selfattn
pos_enc_layer_type: abs_pos
rel_pos_type: latest
use_cgmlp: true
cgmlp_linear_units: 2048
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
merge_method: concat
cgmlp_weight: 0.5
attn_branch_drop_rate: 0.0
num_blocks: 24
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
stochastic_depth_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: '202204'
distributed: true
```
</details>
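For reference, below is a minimal decoding sketch using the `espnet2` Python API. It assumes `espnet` and `espnet_model_zoo` are installed, that `<this-repo-id>` is replaced with this model's Hugging Face repository id, and that `sample.wav` is a 16 kHz mono recording (matching `fs: 16k` in the config above).
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# "<this-repo-id>" is a placeholder; replace it with the actual repository id.
speech2text = Speech2Text.from_pretrained("<this-repo-id>")

# The config above uses fs: 16k, so the audio should be 16 kHz mono.
speech, rate = soundfile.read("sample.wav")
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```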
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
trtd56/ppo-Walker2DBulletEnv-v0 | trtd56 | 2022-07-16T00:39:31Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-16T00:38:25Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 2426.70 +/- 17.01
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **PPO** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **PPO** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual `.zip` file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed, not confirmed by the repo listing.
checkpoint = load_from_hub(repo_id="trtd56/ppo-Walker2DBulletEnv-v0", filename="ppo-Walker2DBulletEnv-v0.zip")
model = PPO.load(checkpoint)
```
|
Evelyn18/distilbert-base-uncased-modelo-becas0 | Evelyn18 | 2022-07-15T22:56:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:becasv3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-15T22:03:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- becasv3
model-index:
- name: distilbert-base-uncased-modelo-becas0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-modelo-becas0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the becasv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1182
## Model description
More information needed
## Intended uses & limitations
More information needed
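A minimal usage sketch (the question and context are invented examples; note that the base checkpoint is the English `distilbert-base-uncased`, so quality on Spanish text may be limited):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/distilbert-base-uncased-modelo-becas0")
result = qa(
    question="¿Qué cubre la beca?",
    context="La beca cubre el cincuenta por ciento de la matrícula para estudiantes de pregrado.",
)
print(result["answer"], result["score"])
```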
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.5381 |
| No log | 2.0 | 10 | 4.9493 |
| No log | 3.0 | 15 | 4.4985 |
| No log | 4.0 | 20 | 4.1063 |
| No log | 5.0 | 25 | 3.7708 |
| No log | 6.0 | 30 | 3.5205 |
| No log | 7.0 | 35 | 3.3313 |
| No log | 8.0 | 40 | 3.2195 |
| No log | 9.0 | 45 | 3.1453 |
| No log | 10.0 | 50 | 3.1182 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner-full | kinanmartin | 2022-07-15T21:22:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:toydata",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-15T16:40:32Z | ---
tags:
- generated_from_trainer
datasets:
- toydata
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-ner-hrl-finetuned-ner-full
This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
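A minimal usage sketch (the example sentence is invented; the actual label set depends on the toydata annotations):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner-full",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Nairobi and met Uhuru Kenyatta."))
```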
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
darragh/swinunetr-btcv-base | darragh | 2022-07-15T21:01:42Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"btcv",
"medical",
"swin",
"en",
"dataset:BTCV",
"arxiv:2201.01266",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-07-15T11:31:09Z | ---
language: en
tags:
- btcv
- medical
- swin
license: apache-2.0
datasets:
- BTCV
---
# Model Overview
This repository contains the code for Swin UNETR [1,2]. Swin UNETR is the state-of-the-art on Medical Segmentation
Decathlon (MSD) and Beyond the Cranial Vault (BTCV) Segmentation Challenge dataset. In [1], a novel methodology is devised for pre-training Swin UNETR backbone in a self-supervised
manner. We provide the option for training Swin UNETR by fine-tuning from pre-trained self-supervised weights or from scratch.
The source repository for the training of these models can be found [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV).
# Installing Dependencies
Dependencies for training and inference can be installed using the model requirements :
``` bash
pip install -r requirements.txt
```
# Intended uses & limitations
You can use the raw model for dicom segmentation, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks which segment CAT scans or MRIs stored in DICOM format. DICOM metadata mostly differs across medical facilities, so when applying the model to a new dataset, it should be fine-tuned.
# How to use
To install necessary dependencies, run the below in bash.
```
git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc
pip install -r pmrc/requirements.txt
cd pmrc/SwinUNETR/BTCV
```
To load the model from the hub.
```
>>> from swinunetr import SwinUnetrModelForInference
>>> model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-base')
```
# Limitations and bias
The training data used for this model is specific to CAT scans from certain health facilities and machines. Data from other facilities may differ in image distributions and may require fine-tuning of the models for best performance.
# Evaluation results
We provide several pre-trained models on BTCV dataset in the following.
<table>
<tr>
<th>Name</th>
<th>Dice (overlap=0.7)</th>
<th>Dice (overlap=0.5)</th>
<th>Feature Size</th>
<th># params (M)</th>
<th>Self-Supervised Pre-trained </th>
</tr>
<tr>
<td>Swin UNETR/Base</td>
<td>82.25</td>
<td>81.86</td>
<td>48</td>
<td>62.1</td>
<td>Yes</td>
</tr>
<tr>
<td>Swin UNETR/Small</td>
<td>79.79</td>
<td>79.34</td>
<td>24</td>
<td>15.7</td>
<td>No</td>
</tr>
<tr>
<td>Swin UNETR/Tiny</td>
<td>72.05</td>
<td>70.35</td>
<td>12</td>
<td>4.0</td>
<td>No</td>
</tr>
</table>
# Data Preparation

The training data is from the [BTCV challenge dataset](https://www.synapse.org/#!Synapse:syn3193805/wiki/217752).
- Target: 13 abdominal organs including 1. Spleen 2. Right Kidney 3. Left Kidney 4. Gallbladder 5. Esophagus 6. Liver 7. Stomach 8. Aorta 9. IVC 10. Portal and Splenic Veins 11. Pancreas 12. Right adrenal gland 13. Left adrenal gland.
- Task: Segmentation
- Modality: CT
- Size: 30 3D volumes (24 Training + 6 Testing)
# Training
See the source repository [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV) for information on training.
# BibTeX entry and citation info
If you find this repository useful, please consider citing the following papers:
```
@inproceedings{tang2022self,
title={Self-supervised pre-training of swin transformers for 3d medical image analysis},
author={Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20730--20740},
year={2022}
}
@article{hatamizadeh2022swin,
title={Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
author={Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger and Xu, Daguang},
journal={arXiv preprint arXiv:2201.01266},
year={2022}
}
```
# References
[1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740).
[2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.
|
darragh/swinunetr-btcv-tiny | darragh | 2022-07-15T21:01:18Z | 40 | 1 | transformers | [
"transformers",
"pytorch",
"btcv",
"medical",
"swin",
"en",
"dataset:BTCV",
"arxiv:2201.01266",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-07-14T13:37:25Z | ---
language: en
tags:
- btcv
- medical
- swin
license: apache-2.0
datasets:
- BTCV
---
# Model Overview
This repository contains the code for Swin UNETR [1,2]. Swin UNETR is the state-of-the-art on Medical Segmentation
Decathlon (MSD) and Beyond the Cranial Vault (BTCV) Segmentation Challenge dataset. In [1], a novel methodology is devised for pre-training Swin UNETR backbone in a self-supervised
manner. We provide the option for training Swin UNETR by fine-tuning from pre-trained self-supervised weights or from scratch.
The source repository for the training of these models can be found [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV).
# Installing Dependencies
Dependencies for training and inference can be installed using the model requirements :
``` bash
pip install -r requirements.txt
```
# Intended uses & limitations
You can use the raw model for dicom segmentation, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks which segment CAT scans or MRIs stored in DICOM format. DICOM metadata mostly differs across medical facilities, so when applying the model to a new dataset, it should be fine-tuned.
# How to use
To install necessary dependencies, run the below in bash.
```
git clone https://github.com/darraghdog/Project-MONAI-research-contributions pmrc
pip install -r pmrc/requirements.txt
cd pmrc/SwinUNETR/BTCV
```
To load the model from the hub.
```
>>> from swinunetr import SwinUnetrModelForInference
>>> model = SwinUnetrModelForInference.from_pretrained('darragh/swinunetr-btcv-tiny')
```
# Limitations and bias
The training data used for this model is specific to CAT scans from certain health facilities and machines. Data from other facilities may differ in image distributions and may require fine-tuning of the models for best performance.
# Evaluation results
We provide several pre-trained models on BTCV dataset in the following.
<table>
<tr>
<th>Name</th>
<th>Dice (overlap=0.7)</th>
<th>Dice (overlap=0.5)</th>
<th>Feature Size</th>
<th># params (M)</th>
<th>Self-Supervised Pre-trained </th>
</tr>
<tr>
<td>Swin UNETR/Base</td>
<td>82.25</td>
<td>81.86</td>
<td>48</td>
<td>62.1</td>
<td>Yes</td>
</tr>
<tr>
<td>Swin UNETR/Small</td>
<td>79.79</td>
<td>79.34</td>
<td>24</td>
<td>15.7</td>
<td>No</td>
</tr>
<tr>
<td>Swin UNETR/Tiny</td>
<td>72.05</td>
<td>70.35</td>
<td>12</td>
<td>4.0</td>
<td>No</td>
</tr>
</table>
# Data Preparation

The training data is from the [BTCV challenge dataset](https://www.synapse.org/#!Synapse:syn3193805/wiki/217752).
- Target: 13 abdominal organs including 1. Spleen 2. Right Kidney 3. Left Kidney 4. Gallbladder 5. Esophagus 6. Liver 7. Stomach 8. Aorta 9. IVC 10. Portal and Splenic Veins 11. Pancreas 12. Right adrenal gland 13. Left adrenal gland.
- Task: Segmentation
- Modality: CT
- Size: 30 3D volumes (24 Training + 6 Testing)
# Training
See the source repository [here](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/BTCV) for information on training.
# BibTeX entry and citation info
If you find this repository useful, please consider citing the following papers:
```
@inproceedings{tang2022self,
title={Self-supervised pre-training of swin transformers for 3d medical image analysis},
author={Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20730--20740},
year={2022}
}
@article{hatamizadeh2022swin,
title={Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
author={Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger and Xu, Daguang},
journal={arXiv preprint arXiv:2201.01266},
year={2022}
}
```
# References
[1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740).
[2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.
|
big-kek/NeuroSkeptic | big-kek | 2022-07-15T20:46:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-15T16:03:23Z | ---
license: other
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-model
This model is a fine-tuned version of [facebook/opt-13b](https://huggingface.co/facebook/opt-13b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3965
- Accuracy: 0.5020
## Model description
More information needed
## Intended uses & limitations
More information needed
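A minimal generation sketch (the prompt is a placeholder; note the underlying OPT-13B checkpoint is very large, so substantial GPU memory is assumed for the half-precision load below):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("big-kek/NeuroSkeptic")
model = AutoModelForCausalLM.from_pretrained("big-kek/NeuroSkeptic", torch_dtype=torch.float16)

inputs = tokenizer("The latest neuroimaging study claims", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```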
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6363 | 1.0 | 3 | 3.2090 | 0.4082 |
| 2.8168 | 2.0 | 6 | 2.4805 | 0.4874 |
| 2.3529 | 3.0 | 9 | 2.4219 | 0.4915 |
| 2.1842 | 4.0 | 12 | 2.4023 | 0.4991 |
| 2.0765 | 5.0 | 15 | 2.3965 | 0.5020 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
aalbertini1990/autotrain-first-test-html-1136241676 | aalbertini1990 | 2022-07-15T17:59:11Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"en",
"dataset:aalbertini1990/autotrain-data-first-test-html",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-15T12:45:46Z | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- aalbertini1990/autotrain-data-first-test-html
co2_eq_emissions: 684.7105644305452
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1136241676
- CO2 Emissions (in grams): 684.7105644305452
## Validation Metrics
- Loss: 0.2270897775888443
- Rouge1: 63.4452
- Rouge2: 60.0038
- RougeL: 63.3343
- RougeLsum: 63.321
- Gen Len: 19.1562
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/aalbertini1990/autotrain-first-test-html-1136241676
``` |
ab93/ppo-LunarLanderv2 | ab93 | 2022-07-15T17:35:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-15T17:35:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 239.43 +/- 17.03
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may need adjusting):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed, not confirmed by the repo listing.
checkpoint = load_from_hub(repo_id="ab93/ppo-LunarLanderv2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RobertoFont/gpt2-large-bne-milunanoches | RobertoFont | 2022-07-15T17:22:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-15T14:48:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-large-bne-milunanoches
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-bne-milunanoches
This model is a fine-tuned version of [PlanTL-GOB-ES/gpt2-large-bne](https://huggingface.co/PlanTL-GOB-ES/gpt2-large-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
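A minimal generation sketch (the Spanish prompt is a placeholder):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="RobertoFont/gpt2-large-bne-milunanoches")
print(generator("Cuenta la leyenda que", max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```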
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.97 | 25 | 3.2210 |
| No log | 1.97 | 50 | 2.9247 |
| No log | 2.97 | 75 | 2.8850 |
| No log | 3.97 | 100 | 2.9118 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
varie/poetry-generation-nextline-mbart-all-fi-multi | varie | 2022-07-15T16:09:57Z | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | 2022-06-20T15:21:14Z | # poetry-generation-nextline-mbart-all-fi-multi
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `all`: trained on data from Project Gutenberg, Wikisource, Poesia publishing house
* `fi`: Finnish language
* `multi`: uses the last three lines (the previous line and the two before it) as input for generation |
davanstrien/clip-roberta-finetuned | davanstrien | 2022-07-15T16:09:56Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:davanstrien/manuscript_noisy_labels_iiif",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-07-13T22:17:56Z | ---
tags:
- generated_from_trainer
datasets:
- davanstrien/manuscript_noisy_labels_iiif
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model is a fine-tuned version of [./clip-roberta](https://huggingface.co/./clip-roberta) on the davanstrien/manuscript_noisy_labels_iiif dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5792
## Model description
More information needed
## Intended uses & limitations
More information needed
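A minimal usage sketch (assumptions: the checkpoint follows the standard `VisionTextDualEncoder` layout and the processor files were saved alongside the weights; the image path and captions are placeholders):

```python
import torch
from PIL import Image
from transformers import VisionTextDualEncoderModel, VisionTextDualEncoderProcessor

model = VisionTextDualEncoderModel.from_pretrained("davanstrien/clip-roberta-finetuned")
processor = VisionTextDualEncoderProcessor.from_pretrained("davanstrien/clip-roberta-finetuned")

image = Image.open("manuscript_page.jpg")  # placeholder path
texts = ["a decorated initial", "a diagram of the heavens"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=-1))  # image-to-text similarity scores
```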
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9841 | 0.07 | 500 | 3.4112 |
| 2.72 | 0.15 | 1000 | 3.3430 |
| 2.6319 | 0.22 | 1500 | 3.2295 |
| 2.5781 | 0.29 | 2000 | 3.1645 |
| 2.5339 | 0.36 | 2500 | 3.1226 |
| 2.503 | 0.44 | 3000 | 3.0856 |
| 2.4581 | 0.51 | 3500 | 3.0639 |
| 2.4494 | 0.58 | 4000 | 3.0415 |
| 2.4275 | 0.65 | 4500 | 3.0245 |
| 2.3909 | 0.73 | 5000 | 2.9991 |
| 2.3902 | 0.8 | 5500 | 2.9931 |
| 2.3741 | 0.87 | 6000 | 2.9612 |
| 2.3536 | 0.95 | 6500 | 2.9509 |
| 2.3392 | 1.02 | 7000 | 2.9289 |
| 2.3083 | 1.09 | 7500 | 2.9214 |
| 2.3094 | 1.16 | 8000 | 2.9153 |
| 2.2864 | 1.24 | 8500 | 2.9034 |
| 2.2893 | 1.31 | 9000 | 2.8963 |
| 2.2697 | 1.38 | 9500 | 2.8847 |
| 2.2762 | 1.46 | 10000 | 2.8665 |
| 2.2667 | 1.53 | 10500 | 2.8536 |
| 2.2548 | 1.6 | 11000 | 2.8472 |
| 2.238 | 1.67 | 11500 | 2.8491 |
| 2.2423 | 1.75 | 12000 | 2.8257 |
| 2.2406 | 1.82 | 12500 | 2.8287 |
| 2.2248 | 1.89 | 13000 | 2.8193 |
| 2.223 | 1.96 | 13500 | 2.8101 |
| 2.1995 | 2.04 | 14000 | 2.8027 |
| 2.1834 | 2.11 | 14500 | 2.7880 |
| 2.1723 | 2.18 | 15000 | 2.7783 |
| 2.1651 | 2.26 | 15500 | 2.7739 |
| 2.1575 | 2.33 | 16000 | 2.7825 |
| 2.1598 | 2.4 | 16500 | 2.7660 |
| 2.1667 | 2.47 | 17000 | 2.7578 |
| 2.1565 | 2.55 | 17500 | 2.7580 |
| 2.1558 | 2.62 | 18000 | 2.7561 |
| 2.1642 | 2.69 | 18500 | 2.7512 |
| 2.1374 | 2.77 | 19000 | 2.7361 |
| 2.1402 | 2.84 | 19500 | 2.7385 |
| 2.1326 | 2.91 | 20000 | 2.7235 |
| 2.1272 | 2.98 | 20500 | 2.7183 |
| 2.0954 | 3.06 | 21000 | 2.7156 |
| 2.0842 | 3.13 | 21500 | 2.7065 |
| 2.0859 | 3.2 | 22000 | 2.7089 |
| 2.0856 | 3.27 | 22500 | 2.6962 |
| 2.0775 | 3.35 | 23000 | 2.6931 |
| 2.0821 | 3.42 | 23500 | 2.6933 |
| 2.0706 | 3.49 | 24000 | 2.7011 |
| 2.0689 | 3.57 | 24500 | 2.7009 |
| 2.0807 | 3.64 | 25000 | 2.6825 |
| 2.0639 | 3.71 | 25500 | 2.6744 |
| 2.0742 | 3.78 | 26000 | 2.6777 |
| 2.0789 | 3.86 | 26500 | 2.6689 |
| 2.0594 | 3.93 | 27000 | 2.6566 |
| 2.056 | 4.0 | 27500 | 2.6676 |
| 2.0223 | 4.08 | 28000 | 2.6711 |
| 2.0185 | 4.15 | 28500 | 2.6568 |
| 2.018 | 4.22 | 29000 | 2.6567 |
| 2.0036 | 4.29 | 29500 | 2.6545 |
| 2.0238 | 4.37 | 30000 | 2.6559 |
| 2.0091 | 4.44 | 30500 | 2.6450 |
| 2.0096 | 4.51 | 31000 | 2.6389 |
| 2.0083 | 4.58 | 31500 | 2.6401 |
| 2.0012 | 4.66 | 32000 | 2.6399 |
| 2.0166 | 4.73 | 32500 | 2.6289 |
| 1.9963 | 4.8 | 33000 | 2.6348 |
| 1.9943 | 4.88 | 33500 | 2.6240 |
| 2.0099 | 4.95 | 34000 | 2.6190 |
| 1.9895 | 5.02 | 34500 | 2.6308 |
| 1.9581 | 5.09 | 35000 | 2.6385 |
| 1.9502 | 5.17 | 35500 | 2.6237 |
| 1.9485 | 5.24 | 36000 | 2.6248 |
| 1.9643 | 5.31 | 36500 | 2.6279 |
| 1.9535 | 5.38 | 37000 | 2.6185 |
| 1.9575 | 5.46 | 37500 | 2.6146 |
| 1.9475 | 5.53 | 38000 | 2.6093 |
| 1.9434 | 5.6 | 38500 | 2.6090 |
| 1.954 | 5.68 | 39000 | 2.6027 |
| 1.9509 | 5.75 | 39500 | 2.6107 |
| 1.9454 | 5.82 | 40000 | 2.5980 |
| 1.9479 | 5.89 | 40500 | 2.6016 |
| 1.9539 | 5.97 | 41000 | 2.5971 |
| 1.9119 | 6.04 | 41500 | 2.6228 |
| 1.8974 | 6.11 | 42000 | 2.6169 |
| 1.9038 | 6.19 | 42500 | 2.6027 |
| 1.9008 | 6.26 | 43000 | 2.6027 |
| 1.9142 | 6.33 | 43500 | 2.6011 |
| 1.8783 | 6.4 | 44000 | 2.5960 |
| 1.8896 | 6.48 | 44500 | 2.6111 |
| 1.8975 | 6.55 | 45000 | 2.5889 |
| 1.9048 | 6.62 | 45500 | 2.6007 |
| 1.9049 | 6.69 | 46000 | 2.5972 |
| 1.8969 | 6.77 | 46500 | 2.6053 |
| 1.9105 | 6.84 | 47000 | 2.5893 |
| 1.8921 | 6.91 | 47500 | 2.5883 |
| 1.8918 | 6.99 | 48000 | 2.5792 |
| 1.8671 | 7.06 | 48500 | 2.6041 |
| 1.8551 | 7.13 | 49000 | 2.6070 |
| 1.8555 | 7.2 | 49500 | 2.6148 |
| 1.8543 | 7.28 | 50000 | 2.6077 |
| 1.8485 | 7.35 | 50500 | 2.6131 |
| 1.8474 | 7.42 | 51000 | 2.6039 |
| 1.8474 | 7.5 | 51500 | 2.5973 |
| 1.8442 | 7.57 | 52000 | 2.5946 |
| 1.8329 | 7.64 | 52500 | 2.6069 |
| 1.8551 | 7.71 | 53000 | 2.5923 |
| 1.8433 | 7.79 | 53500 | 2.5922 |
| 1.851 | 7.86 | 54000 | 2.5993 |
| 1.8313 | 7.93 | 54500 | 2.5960 |
| 1.8298 | 8.0 | 55000 | 2.6058 |
| 1.8159 | 8.08 | 55500 | 2.6286 |
| 1.817 | 8.15 | 56000 | 2.6348 |
| 1.8066 | 8.22 | 56500 | 2.6411 |
| 1.7935 | 8.3 | 57000 | 2.6338 |
| 1.809 | 8.37 | 57500 | 2.6290 |
| 1.812 | 8.44 | 58000 | 2.6258 |
| 1.79 | 8.51 | 58500 | 2.6321 |
| 1.8046 | 8.59 | 59000 | 2.6291 |
| 1.7975 | 8.66 | 59500 | 2.6283 |
| 1.7968 | 8.73 | 60000 | 2.6284 |
| 1.7779 | 8.81 | 60500 | 2.6257 |
| 1.7664 | 8.88 | 61000 | 2.6232 |
| 1.792 | 8.95 | 61500 | 2.6305 |
| 1.7725 | 9.02 | 62000 | 2.6525 |
| 1.7563 | 9.1 | 62500 | 2.6794 |
| 1.7606 | 9.17 | 63000 | 2.6784 |
| 1.7666 | 9.24 | 63500 | 2.6798 |
| 1.7551 | 9.31 | 64000 | 2.6813 |
| 1.7578 | 9.39 | 64500 | 2.6830 |
| 1.7483 | 9.46 | 65000 | 2.6833 |
| 1.7431 | 9.53 | 65500 | 2.6884 |
| 1.743 | 9.61 | 66000 | 2.6932 |
| 1.7395 | 9.68 | 66500 | 2.6927 |
| 1.7473 | 9.75 | 67000 | 2.6904 |
| 1.7413 | 9.82 | 67500 | 2.6892 |
| 1.7437 | 9.9 | 68000 | 2.6898 |
| 1.7546 | 9.97 | 68500 | 2.6894 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Unix/Jxc | Unix | 2022-07-15T13:35:24Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-07-15T13:35:20Z | ---
license: bigscience-bloom-rail-1.0
---
|
fusing/ddpm-celeba-hq-ema | fusing | 2022-07-15T13:19:56Z | 5 | 1 | transformers | [
"transformers",
"ddpm_diffusion",
"arxiv:2006.11239",
"endpoints_compatible",
"region:us"
] | null | 2022-06-07T10:39:30Z | ---
tags:
- ddpm_diffusion
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Usage
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
import PIL.Image
import numpy as np
model_id = "fusing/ddpm-celeba-hq-ema"
# load model and scheduler
ddpm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddpm()
# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
4. 
|
fusing/ddpm-lsun-cat | fusing | 2022-07-15T13:19:43Z | 5 | 1 | transformers | [
"transformers",
"ddpm_diffusion",
"arxiv:2006.11239",
"endpoints_compatible",
"region:us"
] | null | 2022-06-06T12:21:08Z | ---
tags:
- ddpm_diffusion
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Usage
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
import PIL.Image
import numpy as np
model_id = "fusing/ddpm-lsun-cat"
# load model and scheduler
ddpm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddpm()
# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
4. 
|
fusing/ddpm-lsun-bedroom | fusing | 2022-07-15T13:19:24Z | 5 | 1 | transformers | [
"transformers",
"ddpm_diffusion",
"arxiv:2006.11239",
"endpoints_compatible",
"region:us"
] | null | 2022-06-06T12:21:20Z | ---
tags:
- ddpm_diffusion
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Usage
```python
# !pip install diffusers
from diffusers import DiffusionPipeline
import PIL.Image
import numpy as np
model_id = "fusing/ddpm-lsun-bedroom"
# load model and scheduler
ddpm = DiffusionPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddpm()
# process image to PIL
image_processed = image.cpu().permute(0, 2, 3, 1)
image_processed = (image_processed + 1.0) * 127.5
image_processed = image_processed.numpy().astype(np.uint8)
image_pil = PIL.Image.fromarray(image_processed[0])
# save image
image_pil.save("test.png")
```
## Samples
1. 
2. 
3. 
4. 
|
hamishm/distilbert-base-uncased-finetuned-squad | hamishm | 2022-07-15T11:55:51Z | 6 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-30T09:41:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hamishm/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hamishm/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7763
- Validation Loss: 1.1324
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 177048, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4050 | 1.1501 | 0 |
| 0.7763 | 1.1324 | 1 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
epsil/ppo-Walker2DBulletEnv-v0 | epsil | 2022-07-15T11:53:40Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-15T11:50:47Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1968.90 +/- 16.24
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **PPO** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **PPO** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
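A minimal loading sketch (the checkpoint filename inside the repo is an assumption, and `pybullet` / `pybullet_envs` must be installed to create the environment):

```python
import gym
import pybullet_envs  # noqa: F401  (registers Walker2DBulletEnv-v0 with gym)
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed, not confirmed by the repo listing.
checkpoint = load_from_hub(repo_id="epsil/ppo-Walker2DBulletEnv-v0", filename="ppo-Walker2DBulletEnv-v0.zip")
model = PPO.load(checkpoint)

env = gym.make("Walker2DBulletEnv-v0")
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)
```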
|
spacestar1705/q-FrozenLake-v1-4x4-noSlippery | spacestar1705 | 2022-07-15T11:37:52Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-15T11:37:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not part of a published package.
model = load_from_hub(repo_id="spacestar1705/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ThomasSimonini/ppo-Walker2DBulletEnv-v0 | ThomasSimonini | 2022-07-15T10:57:27Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-03-02T23:29:05Z | ---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 29.51 +/- 2.93
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **PPO** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **PPO** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual `.zip` file):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is assumed, not confirmed by the repo listing.
checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-Walker2DBulletEnv-v0", filename="ppo-Walker2DBulletEnv-v0.zip")
model = PPO.load(checkpoint)
```
|
freedomking/mc-bert | freedomking | 2022-07-15T10:14:00Z | 9 | 5 | transformers | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-07-15T10:04:34Z | MC-BERT is a novel conceptualized representation learning approach for the medical domain. First, we use a different mask generation procedure to mask spans of tokens, rather than only random ones. We also introduce two kinds of masking strategies, namely whole entity masking and whole span masking. Finally, MC-BERT splits the input document into segments based on the actual "sentences" provided by the user as positive samples and samples random sentences from other documents as negative samples for the next-sentence prediction task.

More detail:
https://github.com/alibaba-research/ChineseBLUE |
ieborhan/irisg444_4c0-Species-classification | ieborhan | 2022-07-15T07:42:25Z | 0 | 0 | sklearn | [
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] | tabular-classification | 2022-07-15T07:42:23Z | ---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on irisg444_4c0 to apply classification on Species
**Metrics of the best model:**
| Metric | Value |
|---|---|
| accuracy | 0.953333 |
| recall_macro | 0.953333 |
| precision_macro | 0.956229 |
| f1_macro | 0.953216 |

Best model: `LogisticRegression(class_weight='balanced', max_iter=1000)`
**See model plot below:**
The fitted pipeline:

```
Pipeline(steps=[('easypreprocessor',
                 EasyPreprocessor(types=
                                                 continuous  dirty_float  ...  free_string  useless
                                  SepalLengthCm        True        False  ...        False    False
                                  SepalWidthCm         True        False  ...        False    False
                                  PetalLengthCm        True        False  ...        False    False
                                  PetalWidthCm         True        False  ...        False    False

                                  [4 rows x 7 columns])),
                ('logisticregression',
                 LogisticRegression(C=1, class_weight='balanced', max_iter=1000))])
```
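A minimal sketch of how such a baseline could be reproduced — assuming `EasyPreprocessor` is importable from the top-level `dabl` package and that the data is an Iris-style DataFrame with the four feature columns shown above (the CSV path and target column are hypothetical):

```python
import pandas as pd
from dabl import EasyPreprocessor            # assumed import path
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.read_csv("Iris.csv")                 # hypothetical file name
X = df[["SepalLengthCm", "SepalWidthCm", "PetalLengthCm", "PetalWidthCm"]]
y = df["Species"]                            # assumed target column

# Same steps and hyperparameters as the serialized pipeline above.
pipe = Pipeline(steps=[
    ("easypreprocessor", EasyPreprocessor()),
    ("logisticregression", LogisticRegression(C=1, class_weight="balanced", max_iter=1000)),
])
pipe.fit(X, y)
```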
**Disclaimer:** This model was trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Training logs**, including the models tried in the process, can be found in logs.txt |
hugginglearners/Ethiopian-Food-Classifier | hugginglearners | 2022-07-15T07:15:52Z | 0 | 0 | fastai | [
"fastai",
"Weights & Biases",
"Ethiopian",
"Food",
"Classifier",
"region:us"
] | null | 2022-07-14T16:03:30Z | ---
tags:
- fastai
- Weights & Biases
- Ethiopian
- Food
- Classifier
---
# Model card
## Model description
"While the cuisine of Ethiopia is gradually becoming better known, it's no overstatement to say it remains one of the world's best-kept secrets." [CNN](https://edition.cnn.com/travel/article/ethiopian-food-best-dishes-africa/index.html)
This model is an Ethiopian food image classifier trained on the following food categories:
- Beyaynetu
- Chechebsa
- Doro wat
- Firfir
- Genfo
- Kikil
- Kitfo
- Shekla tibs
- Shiro wat
- Tihlo
- Tire_siga
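A hedged usage sketch — it assumes the repository holds a fastai learner exported with `push_to_hub_fastai`, and the image path is hypothetical:

```python
from huggingface_hub import from_pretrained_fastai

learn = from_pretrained_fastai("hugginglearners/Ethiopian-Food-Classifier")
pred_class, pred_idx, probs = learn.predict("beyaynetu.jpg")  # hypothetical image file
print(pred_class, float(probs[pred_idx]))
```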
Full report on this model can be found [here](https://wandb.ai/tinsae/Ethiopian-foods/reports/Ethiopian-Foods-Classification---VmlldzoyMzExNjk1?accessToken=hx3g5jwmlrn059f11zp5v2ktg62ygl23mkxy2tevliu6bmqsmpazp5jkmqzjrg71) |
simecek/DNADebertaBPE30k | simecek | 2022-07-15T06:45:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-14T08:39:47Z | ---
tags:
- generated_from_trainer
model-index:
- name: DNADebertaBPE30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNADebertaBPE30k
This model was trained on an unspecified dataset; no base checkpoint is specified.
It achieves the following results on the evaluation set:
- eval_loss: 5.1519
- eval_runtime: 308.5062
- eval_samples_per_second: 337.384
- eval_steps_per_second: 21.089
- epoch: 7.22
- step: 105695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
simecek/DNADebertaBPE10k | simecek | 2022-07-15T06:43:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-14T08:45:57Z | ---
tags:
- generated_from_trainer
model-index:
- name: DNADebertaBPE10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNADebertaBPE10k
This model was trained on an unspecified dataset; no base checkpoint is specified.
It achieves the following results on the evaluation set:
- eval_loss: 4.7323
- eval_runtime: 283.5074
- eval_samples_per_second: 394.223
- eval_steps_per_second: 24.641
- epoch: 7.43
- step: 116731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hugginglearners/kvasir-seg | hugginglearners | 2022-07-15T05:38:17Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2022-07-14T06:47:24Z | ---
tags:
- fastai
---
# Model card
## Model description
Fastai `unet` created with `unet_learner` using `resnet34`
## Intended uses & limitations
This is only used for demonstration of fine tuning capabilities with fastai. It may be useful for further research. This model should **not** be used for gastrointestinal polyp diagnosis.
## Training and evaluation data
The model was trained on [Kvasir SEG dataset](https://datasets.simula.no/kvasir-seg/). Kvasir SEG is an open-access dataset of gastrointestinal polyp images and corresponding segmentation masks, manually annotated and verified by an experienced gastroenterologist.
20% of the dataset was used as the validation set and 80% as the training set.
### Model training details:
#### Data pre-processing
Masks were converted to 1-bit images (0 for background, 1 for mask) using:
```python
from pathlib import Path
from fastai.vision.all import get_image_files
from PIL import Image
from tqdm import tqdm

path = Path('/notebooks/Kvasir-SEG')  # dataset root used throughout this card
(path/'masks1b-binary').mkdir(parents=True, exist_ok=True)
thresh = 127  # grayscale threshold separating mask pixels from background
for img_path in tqdm(get_image_files(path/'masks')):
    img = Image.open(img_path)
    img1b = img.convert('L').point(lambda x: 1 if x > thresh else 0)  # binarise
    img1b.save(path/'masks1b-binary'/f'{img_path.stem}.png')
```
#### Data loaders
`SegmentationDataLoaders` was used to create the fastai data loaders:
```python
def label_func(fn): return path/'masks1b-binary'/f'{fn.stem}.png'
dls = SegmentationDataLoaders.from_label_func(
path, bs=24, fnames = get_image_files(path/'images'),
label_func = label_func,
codes = list(range(2)),
item_tfms=Resize(320),
batch_tfms=aug_transforms(size=224, flip_vert=True)
)
```
A sample of training images:

#### Learner
Create a learner with Dice and JaccardCoeff metrics:
```python
learn = unet_learner(dls, resnet34, metrics=[Dice, JaccardCoeff]).to_fp16()
```
#### Learning rate
Learning rate finder

#### Fine-tuning
Fine-tuning for 12 epochs:
`learn.fine_tune(12, 1e-4)`
```
epoch train_loss valid_loss dice jaccard_coeff time
0 0.582160 0.433768 0.593044 0.421508 00:38
epoch train_loss valid_loss dice jaccard_coeff time
0 0.307588 0.261374 0.712569 0.553481 00:38
1 0.261775 0.232007 0.714458 0.555764 00:38
2 0.246054 0.227708 0.781048 0.640754 00:38
3 0.224612 0.185920 0.796701 0.662097 00:39
4 0.208768 0.179064 0.821945 0.697714 00:39
5 0.192531 0.171336 0.816464 0.689851 00:39
6 0.177166 0.167357 0.820771 0.696023 00:39
7 0.168222 0.158182 0.838388 0.721745 00:39
8 0.155157 0.161950 0.829525 0.708709 00:39
9 0.148792 0.164533 0.828383 0.707043 00:38
10 0.143541 0.158669 0.833519 0.714559 00:39
11 0.140083 0.159437 0.832745 0.713422 00:38
```

#### Results
Visualization of results
Target/Prediction

Top losses

#### Libraries used:
- `huggingface_hub`: 0.8.1
- `fastai`: 2.6.3 |
Team-PIXEL/pixel-base-finetuned-wnli | Team-PIXEL | 2022-07-15T03:09:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-15T03:06:10Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-wnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-wnli
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE WNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-cola | Team-PIXEL | 2022-07-15T02:38:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-15T02:35:10Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-cola
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE COLA dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100.0
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
CennetOguz/bert-large-uncased-finetuned-youcook_4 | CennetOguz | 2022-07-15T00:43:32Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-15T00:34:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-youcook_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-youcook_4
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915 | 1.0 | 206 | 2.1036 |
| 2.0412 | 2.0 | 412 | 2.2207 |
| 1.9062 | 3.0 | 618 | 1.7281 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
CennetOguz/bert-large-uncased-finetuned-youcook_2 | CennetOguz | 2022-07-15T00:16:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-15T00:08:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-youcook_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-youcook_2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9929
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3915 | 1.0 | 206 | 2.1036 |
| 2.0412 | 2.0 | 412 | 2.2207 |
| 1.9062 | 3.0 | 618 | 1.7281 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cannlytics/skunkfx | cannlytics | 2022-07-14T21:01:54Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | 2022-07-14T20:54:17Z | ---
license: mit
---
# Predicting Effects and Aromas
<div align="center" style="text-align:center; margin-top:1rem; margin-bottom: 1rem;">
<img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Flogos%2Fskunkfx_logo.png?alt=media&token=1a75b3cc-3230-446c-be7d-5c06012c8e30">
</div>
> "It's been hard to breathe and the smell's been just horrendous... [It's] like you've literally been sprayed by a
**skunk**." - Resident of Prague, Oklahoma in
[*'It's nasty': Prague neighbors push back on area cannabis facility*](https://kfor.com/news/local/its-nasty-prague-neighbors-push-back-on-area-cannabis-facility/), Oklahoma News 4 (2022).
## Objective
Can we build a model to **predict** if someone may *report* specific **effects** or **aromas** given a cannabis product’s **lab results**?
## Literature
[Over eight hundred cannabis strains characterized by the relationship between their psychoactive effects,
perceptual profiles, and chemical compositions](https://www.biorxiv.org/content/10.1101/759696v1.abstract) by Laura Alethia de la Fuente, Federico Zamberlan, Andres Sanchez, Facundo Carrillo, Enzo Tagliazucchi, Carla Pallavicini (2019).
* **Claim**: *"While cannabinoid content was variable even within individual strains, terpene profiles matched the perceptual characterizations made by the users and could be used to predict associations between different psychoactive effects."*
## Data
A panel of strain reviews was curated from the data published by [Alethia, et. al. (2019)](https://data.mendeley.com/datasets/6zwcgrttkp/1). First, we downloaded the authors' strain review and lab result datasets. We then curated terpene and cannabinoid data from the raw text files in the lab result dataset. Average cannabinoid and terpene concentrations were calculated for each of the 184 strains in the dataset from 431 lab results. Reviews are for purported strains and the lab results may or may not be representative of the concentration of the product that the reviewer is referencing. However, without the actual lab results of the product that the reviewer is referencing, the average concentrations for similarly named products can serve as an estimate. The following processing and assumptions were applied.
- Field names were transformed to `snake_case`.
- The fields `total_terpenes` and `total_cannabinoids` were calculated as the simple sum of all terpenes and cannabinoids respectively.
- The fields `total_thc`, `total_cbd`, and `total_cbg` were calculated using the decarboxylation rate (87.7%) for THCA, CBDA, and CBGA (a short sketch of this calculation follows the list).
- Observations with `total_cannabinoids` greater than 35% or `total_terpenes` greater than 6% were presumed to be outliers and were excluded.
- The field `classification` was determined by the original authors from natural language processing (NLP) and can take a value of `sativa`, `indica`, or `hybrid` depending on the language in the reviewer's description.
- Fields for each reported aroma and effect were created and assigned a value of 1 if the reviewer reported the aroma or effect and 0 otherwise.
- Terpenes of similar names were combined on missing values: `p_cymene` with `pcymene`, `beta_caryophyllene` with `caryophyllene`, and `humulene` with `alpha_humulene`.
- Certain terpenes were summed into an encompassing field: `ocimene`, `beta_ocimene`, `trans_ocimene` to `ocimene` and `trans_nerolidol`, `cis_nerolidol`, `transnerolidol_1`, `transnerolidol_2` to `nerolidol`.
- A new field, `terpinenes`, was created as the sum of `alpha_terpinene`, `gamma_terpinene`, `terpinolene`, and `terpinene`.
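To make the decarboxylation adjustment above concrete, a minimal sketch — it assumes the curated panel has been loaded as a pandas DataFrame with the snake_case cannabinoid columns listed under Methodology (the CSV path is hypothetical):

```python
import pandas as pd

panel = pd.read_csv("curated-panel.csv")  # hypothetical local copy of the panel

# Decarboxylation-adjusted totals, using the 87.7% conversion rate noted above.
panel["total_thc"] = panel["delta_9_thc"] + 0.877 * panel["thca"]
panel["total_cbd"] = panel["cbd"] + 0.877 * panel["cbda"]
panel["total_cbg"] = panel["cbg"] + 0.877 * panel["cbga"]
```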
| Datasets | URL |
|----------|-----|
| Raw data | <https://data.mendeley.com/datasets/6zwcgrttkp/1> |
| Curated panel data | <https://cannlytics.page.link/reported-effects> |
| Potential strain effects data | <https://cannlytics.page.link/strain-effects> |
<!-- TODO: Add WA and CT (OH?) datasets :) -->
## Methodology
A [multivariate probit model](https://en.wikipedia.org/wiki/Multivariate_probit_model) is used to predict the probability of all potential effects and aromas simultaneously given lab results for a sample or samples. Specific effects and aromas are predicted to be reported when the estimated probability of an effect or aroma crosses a threshold. The thresholds are set to best fit the observed occurrence of each effect and aroma. Below are the variates used in the models estimated.
```json
{
"full": [
"cbc",
"cbd",
"cbda",
"cbg",
"cbga",
"cbn",
"delta_8_thc",
"delta_9_thc",
"thca",
"thcv",
"alpha_bisabolol",
"alpha_pinene",
"alpha_terpinene",
"beta_caryophyllene",
"beta_myrcene",
"beta_pinene",
"camphene",
"carene",
"caryophyllene_oxide",
"d_limonene",
"eucalyptol",
"gamma_terpinene",
"geraniol",
"guaiol",
"humulene",
"isopulegol",
"linalool",
"nerolidol",
"ocimene",
"p_cymene",
"terpinene",
"terpinolene"
],
"terpene_only": [
"alpha_bisabolol",
"alpha_pinene",
"alpha_terpinene",
"beta_caryophyllene",
"beta_myrcene",
"beta_pinene",
"camphene",
"carene",
"caryophyllene_oxide",
"d_limonene",
"eucalyptol",
"gamma_terpinene",
"geraniol",
"guaiol",
"humulene",
"isopulegol",
"linalool",
"nerolidol",
"ocimene",
"p_cymene",
"terpinene",
"terpinolene"
],
"cannabinoid_only": [
"cbc",
"cbd",
"cbda",
"cbg",
"cbga",
"cbn",
"delta_8_thc",
"delta_9_thc",
"thca",
"thcv"
],
"totals": ["total_cbd", "total_thc", "total_terpenes"],
"simple": ["total_cbd", "total_thc"]
}
```
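As a simplified illustration of the threshold-fitting idea described above, the sketch below fits independent univariate probits (a stand-in for the full multivariate probit, which models correlated outcomes) on the "totals" variates; the outcome column names and CSV path are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("curated-panel.csv")  # hypothetical panel with 0/1 outcome columns
X = sm.add_constant(panel[["total_cbd", "total_thc", "total_terpenes"]])  # the "totals" model

models, thresholds = {}, {}
for outcome in ["effect_relaxed", "aroma_citrus"]:  # assumed outcome column names
    y = panel[outcome]
    fit = sm.Probit(y, X).fit(disp=0)
    probs = fit.predict(X)
    # Set the threshold so the predicted occurrence rate matches the observed rate.
    thresholds[outcome] = float(np.quantile(probs, 1 - y.mean()))
    models[outcome] = fit

# An effect or aroma is predicted to be reported when its probability crosses the threshold.
predictions = {o: models[o].predict(X) > thresholds[o] for o in models}
```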
## Results
An implementation of the prediction model can be found at <https://cannlytics.com/effects> and utilized through the API endpoint <https://cannlytics.com/api/stats/effects>. In general, there are 3 main actions:
1. You can use the model to predict potentially reported effects and aromas for any cannabis flower for which you have lab results. Simply post your lab results to the `/stats/effects` endpoint, specifying your model if you desire, and you will receive effect and aroma predictions.
2. You can get the model statistics by making a `GET` request to `/stats/effects`. Currently, the model statistics include `false_positive_rate`, `false_negative_rate`, `true_positive_rate`, `true_negative_rate`, `accuracy`, and `informedness`.
3. Finally, you can post the actual effects and aromas that you may observe with the `/stats/effects/actual` endpoint.
You can substitute training data, for strain reviews or lab results, as you see fit. Please see the API documentation for more information about using this API endpoint.
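For illustration, a hedged example of posting lab results to the endpoint described above — the request schema (field names, payload shape) is an assumption, so check the API documentation for the real contract:

```python
import requests

lab_results = {"total_thc": 21.5, "total_cbd": 0.4, "total_terpenes": 2.1}  # assumed field names
response = requests.post(
    "https://cannlytics.com/api/stats/effects",
    json={"model": "simple", "samples": [lab_results]},  # payload shape assumed
)
print(response.json())
```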
## Insights and future work
The more training data the better. If you want to [contribute lab results or reviews](https://cannlytics.com/stats/effects), then you are welcome! You can also use your own training data. Using the model to predict out-of-sample helps make the model robust. Please feel free to report your use of the model and its accuracy in the wild to <[email protected]>. Lastly, but most importantly, remember that the predictions are for the probability of effects and aromas being reported by the observed sample given observed lab results. Extrapolations beyond the ranges of observed values aren't valid and all statistics should be taken at face value. Thank you and good fortune!
## Disclaimer
```
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
nakamura196/roberta-small-hi-char | nakamura196 | 2022-07-14T20:32:40Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-11T06:35:00Z | ---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "入[MASK]外無之候江戸大水又ハ大地震なと"
- text: "日向[MASK]御望之由可令披露候"
---
# roberta-small-hi-char
## Model Description
This is a RoBERTa model pre-trained on HI texts with character tokenizer.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("nakamura196/roberta-small-hi-char")
model=AutoModelForMaskedLM.from_pretrained("nakamura196/roberta-small-hi-char")
```
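For masked-token prediction, the standard `fill-mask` pipeline can be used directly (the example sentence is one of the widget texts above):

```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nakamura196/roberta-small-hi-char")
print(fill_mask("日向[MASK]御望之由可令披露候"))
```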
|
Juliano/fault_injection_mlaas | Juliano | 2022-07-14T20:20:00Z | 0 | 0 | null | [
"region:us"
] | null | 2022-07-14T19:40:59Z | Hosts the pre-tained extracted model from glove.twitter.27B.100d.txt from https://huggingface.co/stanfordnlp/glove/tree/main
Used in: https://github.com/Juliano-rb/experiments_fault_injection_mlaas |
aatmasidha/newsmodelclassification | aatmasidha | 2022-07-14T20:16:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-12T08:59:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: newsmodelclassification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9271124951673986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsmodelclassification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2065
- Accuracy: 0.927
- F1: 0.9271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8011 | 1.0 | 250 | 0.2902 | 0.911 | 0.9090 |
| 0.2316 | 2.0 | 500 | 0.2065 | 0.927 | 0.9271 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
Team-PIXEL/pixel-base-finetuned-sst2 | Team-PIXEL | 2022-07-14T19:18:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T19:14:45Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: pixel-base-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-sst2
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the GLUE SST2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
kuttersn/gpt2_chatbot | kuttersn | 2022-07-14T19:04:01Z | 35 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-13T03:00:29Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gpt2_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_chatbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5732
- Accuracy: 0.3909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
liyijing024/swin-base-patch4-window7-224-in22k-Chinese-finetuned | liyijing024 | 2022-07-14T18:04:48Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-14T17:28:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-base-patch4-window7-224-in22k-Chinese-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-in22k-Chinese-finetuned
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0121 | 0.99 | 140 | 0.0001 | 1.0 |
| 0.0103 | 1.99 | 280 | 0.0001 | 1.0 |
| 0.0049 | 2.99 | 420 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.8.0+cu111
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
benji2264/ppo-LunarLander-v2 | benji2264 | 2022-07-14T18:01:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-14T18:00:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 246.41 +/- 23.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repository's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename is assumed; adjust it to match the file stored in this repo.
checkpoint = load_from_hub(repo_id="benji2264/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yunbaree/distilbert-base-uncased-finetuned-emotion | yunbaree | 2022-07-14T16:27:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T16:01:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240032665380036
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.843 | 1.0 | 250 | 0.3250 | 0.906 | 0.9041 |
| 0.254 | 2.0 | 500 | 0.2244 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
GuoLiyong/dan_sharing_2022_baai | GuoLiyong | 2022-07-14T16:19:42Z | 0 | 0 | null | [
"region:us"
] | null | 2022-07-14T07:02:42Z | Dan Povey's sharing at BAAI 2022, Beijing.
Verify the MD5 value:
tar -zxvf Daniel_Povey_BAAI_2022.tar.gz
md5sum Daniel_Povey_BAAI_2022.mp4
# 1d0b9f941fc30814528c95bc7630b6a8 Daniel_Povey_BAAI_2022.mp4
|
ericklerouge123/xlm-roberta-base-finetuned-panx-de-fr | ericklerouge123 | 2022-07-14T16:17:52Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-14T14:59:42Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6886160714285715
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- F1: 0.6886
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 |
| 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 |
| 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Siyong/MT | Siyong | 2022-07-14T15:59:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-13T05:57:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-base-Millad_TIMIT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-Millad_TIMIT
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3772
- Wer: 0.6859
- Cer: 0.3217
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 2.36 | 2000 | 2.6233 | 1.0130 | 0.6241 |
| No log | 4.73 | 4000 | 2.2206 | 0.9535 | 0.5032 |
| No log | 7.09 | 6000 | 2.3036 | 0.9368 | 0.5063 |
| 1.235 | 9.46 | 8000 | 1.9932 | 0.9275 | 0.5032 |
| 1.235 | 11.82 | 10000 | 2.0207 | 0.8922 | 0.4498 |
| 1.235 | 14.18 | 12000 | 1.6171 | 0.7993 | 0.3976 |
| 1.235 | 16.55 | 14000 | 1.6729 | 0.8309 | 0.4209 |
| 0.2779 | 18.91 | 16000 | 1.7043 | 0.8141 | 0.4340 |
| 0.2779 | 21.28 | 18000 | 1.7426 | 0.7658 | 0.3960 |
| 0.2779 | 23.64 | 20000 | 1.5230 | 0.7361 | 0.3830 |
| 0.2779 | 26.0 | 22000 | 1.4286 | 0.7658 | 0.3794 |
| 0.1929 | 28.37 | 24000 | 1.4450 | 0.7379 | 0.3644 |
| 0.1929 | 30.73 | 26000 | 1.5922 | 0.7491 | 0.3826 |
| 0.1929 | 33.1 | 28000 | 1.4443 | 0.7454 | 0.3617 |
| 0.1929 | 35.46 | 30000 | 1.5450 | 0.7268 | 0.3621 |
| 0.1394 | 37.83 | 32000 | 1.9268 | 0.7491 | 0.3763 |
| 0.1394 | 40.19 | 34000 | 1.7094 | 0.7342 | 0.3783 |
| 0.1394 | 42.55 | 36000 | 1.4024 | 0.7082 | 0.3494 |
| 0.1394 | 44.92 | 38000 | 1.4467 | 0.6840 | 0.3395 |
| 0.104 | 47.28 | 40000 | 1.4145 | 0.6933 | 0.3407 |
| 0.104 | 49.65 | 42000 | 1.3901 | 0.6970 | 0.3403 |
| 0.104 | 52.01 | 44000 | 1.3589 | 0.6636 | 0.3348 |
| 0.104 | 54.37 | 46000 | 1.3716 | 0.6952 | 0.3340 |
| 0.0781 | 56.74 | 48000 | 1.4025 | 0.6896 | 0.3312 |
| 0.0781 | 59.1 | 50000 | 1.3772 | 0.6859 | 0.3217 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-korquadv1 | Team-PIXEL | 2022-07-14T15:58:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:squad_kor_v1",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-14T15:55:25Z | ---
tags:
- generated_from_trainer
datasets:
- squad_kor_v1
model-index:
- name: pixel-base-finetuned-korquadv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-korquadv1
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad_kor_v1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 45
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
neulab/gpt2-finetuned-wikitext103 | neulab | 2022-07-14T15:38:21Z | 323 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2201.12431",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-12T14:37:59Z | This is a `gpt2` model, finetuned on the Wikitext-103 dataset.
It achieves a perplexity of **14.84** when evaluated with a "sliding window" context, using the `run_clm.py` script at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers).
| Base LM: | `distilgpt2` | `gpt2` |
| :--- | ----: | ---: |
| base perplexity | 18.25 | 14.84 |
| +kNN-LM | 15.03 | 12.57 |
| +RetoMaton | **14.70** | **12.46** |
This model was released as part of the paper ["Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval"](https://arxiv.org/pdf/2201.12431.pdf) (ICML'2022).
For more information, see: [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers)
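The model loads with the standard Transformers API; the kNN-LM and RetoMaton numbers above additionally require the retrieval scripts from the knn-transformers repository linked here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("neulab/gpt2-finetuned-wikitext103")
model = AutoModelForCausalLM.from_pretrained("neulab/gpt2-finetuned-wikitext103")
```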
If you use this model, please cite:
```
@inproceedings{alon2022neuro,
title={Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval},
author={Alon, Uri and Xu, Frank and He, Junxian and Sengupta, Sudipta and Roth, Dan and Neubig, Graham},
booktitle={International Conference on Machine Learning},
pages={468--485},
year={2022},
organization={PMLR}
}
``` |
jslowik/distilbert-base-uncased-finetuned-emotion | jslowik | 2022-07-14T15:05:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T15:01:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9262423473736914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9265
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3075 | 0.907 | 0.9048 |
| 0.2481 | 2.0 | 500 | 0.2156 | 0.9265 | 0.9262 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gossminn/predict-perception-bertino-focus-object | gossminn | 2022-07-14T14:46:13Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T14:42:20Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-focus-object
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2766
- R2: 0.5460
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4798 | 1.0 | 14 | 0.4519 | 0.2581 |
| 0.2481 | 2.0 | 28 | 0.3042 | 0.5007 |
| 0.12 | 3.0 | 42 | 0.3746 | 0.3851 |
| 0.0969 | 4.0 | 56 | 0.3186 | 0.4770 |
| 0.0907 | 5.0 | 70 | 0.3727 | 0.3882 |
| 0.0673 | 6.0 | 84 | 0.2847 | 0.5327 |
| 0.0457 | 7.0 | 98 | 0.3141 | 0.4844 |
| 0.0431 | 8.0 | 112 | 0.3369 | 0.4470 |
| 0.028 | 9.0 | 126 | 0.3039 | 0.5012 |
| 0.0244 | 10.0 | 140 | 0.2964 | 0.5135 |
| 0.0201 | 11.0 | 154 | 0.3072 | 0.4958 |
| 0.0153 | 12.0 | 168 | 0.3049 | 0.4995 |
| 0.0155 | 13.0 | 182 | 0.2924 | 0.5201 |
| 0.015 | 14.0 | 196 | 0.2585 | 0.5757 |
| 0.0181 | 15.0 | 210 | 0.3258 | 0.4652 |
| 0.0136 | 16.0 | 224 | 0.3142 | 0.4842 |
| 0.0105 | 17.0 | 238 | 0.2536 | 0.5837 |
| 0.0104 | 18.0 | 252 | 0.2407 | 0.6050 |
| 0.0107 | 19.0 | 266 | 0.2727 | 0.5524 |
| 0.0084 | 20.0 | 280 | 0.3117 | 0.4883 |
| 0.0102 | 21.0 | 294 | 0.2999 | 0.5078 |
| 0.0074 | 22.0 | 308 | 0.3018 | 0.5047 |
| 0.0068 | 23.0 | 322 | 0.2826 | 0.5361 |
| 0.0054 | 24.0 | 336 | 0.2804 | 0.5398 |
| 0.0044 | 25.0 | 350 | 0.2912 | 0.5220 |
| 0.0048 | 26.0 | 364 | 0.2813 | 0.5382 |
| 0.005 | 27.0 | 378 | 0.2933 | 0.5186 |
| 0.0046 | 28.0 | 392 | 0.2820 | 0.5371 |
| 0.004 | 29.0 | 406 | 0.2717 | 0.5541 |
| 0.0054 | 30.0 | 420 | 0.2717 | 0.5540 |
| 0.0042 | 31.0 | 434 | 0.2699 | 0.5570 |
| 0.0033 | 32.0 | 448 | 0.2630 | 0.5684 |
| 0.0038 | 33.0 | 462 | 0.2578 | 0.5767 |
| 0.0032 | 34.0 | 476 | 0.2687 | 0.5589 |
| 0.004 | 35.0 | 490 | 0.2737 | 0.5507 |
| 0.0031 | 36.0 | 504 | 0.2753 | 0.5481 |
| 0.0037 | 37.0 | 518 | 0.2819 | 0.5373 |
| 0.0034 | 38.0 | 532 | 0.2759 | 0.5471 |
| 0.0034 | 39.0 | 546 | 0.2835 | 0.5347 |
| 0.0029 | 40.0 | 560 | 0.2814 | 0.5381 |
| 0.0033 | 41.0 | 574 | 0.2801 | 0.5403 |
| 0.0025 | 42.0 | 588 | 0.2759 | 0.5472 |
| 0.0029 | 43.0 | 602 | 0.2790 | 0.5421 |
| 0.0028 | 44.0 | 616 | 0.2801 | 0.5401 |
| 0.003 | 45.0 | 630 | 0.2772 | 0.5451 |
| 0.0028 | 46.0 | 644 | 0.2764 | 0.5463 |
| 0.0026 | 47.0 | 658 | 0.2766 | 0.5460 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gossminn/predict-perception-bertino-focus-assassin | gossminn | 2022-07-14T14:34:40Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T14:26:46Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-focus-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-focus-assassin
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3409
- R2: 0.3205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5573 | 1.0 | 14 | 0.4856 | 0.0321 |
| 0.1739 | 2.0 | 28 | 0.4735 | 0.0562 |
| 0.0813 | 3.0 | 42 | 0.3416 | 0.3191 |
| 0.0764 | 4.0 | 56 | 0.3613 | 0.2799 |
| 0.0516 | 5.0 | 70 | 0.3264 | 0.3495 |
| 0.0459 | 6.0 | 84 | 0.4193 | 0.1643 |
| 0.0414 | 7.0 | 98 | 0.3502 | 0.3019 |
| 0.028 | 8.0 | 112 | 0.3361 | 0.3301 |
| 0.0281 | 9.0 | 126 | 0.3610 | 0.2804 |
| 0.027 | 10.0 | 140 | 0.3523 | 0.2978 |
| 0.0216 | 11.0 | 154 | 0.3440 | 0.3143 |
| 0.0181 | 12.0 | 168 | 0.3506 | 0.3012 |
| 0.013 | 13.0 | 182 | 0.3299 | 0.3424 |
| 0.0116 | 14.0 | 196 | 0.3611 | 0.2803 |
| 0.0118 | 15.0 | 210 | 0.3505 | 0.3013 |
| 0.0139 | 16.0 | 224 | 0.3529 | 0.2967 |
| 0.0099 | 17.0 | 238 | 0.3536 | 0.2952 |
| 0.0096 | 18.0 | 252 | 0.3542 | 0.2941 |
| 0.0107 | 19.0 | 266 | 0.3770 | 0.2486 |
| 0.0088 | 20.0 | 280 | 0.3467 | 0.3091 |
| 0.0065 | 21.0 | 294 | 0.3327 | 0.3369 |
| 0.0073 | 22.0 | 308 | 0.3479 | 0.3066 |
| 0.0062 | 23.0 | 322 | 0.3566 | 0.2893 |
| 0.0063 | 24.0 | 336 | 0.3503 | 0.3019 |
| 0.0057 | 25.0 | 350 | 0.3371 | 0.3282 |
| 0.0049 | 26.0 | 364 | 0.3334 | 0.3355 |
| 0.0045 | 27.0 | 378 | 0.3399 | 0.3225 |
| 0.0049 | 28.0 | 392 | 0.3379 | 0.3266 |
| 0.0049 | 29.0 | 406 | 0.3377 | 0.3268 |
| 0.0055 | 30.0 | 420 | 0.3357 | 0.3309 |
| 0.005 | 31.0 | 434 | 0.3394 | 0.3235 |
| 0.0046 | 32.0 | 448 | 0.3432 | 0.3159 |
| 0.0048 | 33.0 | 462 | 0.3427 | 0.3169 |
| 0.0041 | 34.0 | 476 | 0.3450 | 0.3123 |
| 0.0041 | 35.0 | 490 | 0.3436 | 0.3151 |
| 0.0051 | 36.0 | 504 | 0.3394 | 0.3234 |
| 0.0037 | 37.0 | 518 | 0.3370 | 0.3283 |
| 0.004 | 38.0 | 532 | 0.3370 | 0.3284 |
| 0.0033 | 39.0 | 546 | 0.3339 | 0.3344 |
| 0.0034 | 40.0 | 560 | 0.3335 | 0.3352 |
| 0.003 | 41.0 | 574 | 0.3373 | 0.3276 |
| 0.0035 | 42.0 | 588 | 0.3380 | 0.3264 |
| 0.0032 | 43.0 | 602 | 0.3382 | 0.3259 |
| 0.0034 | 44.0 | 616 | 0.3432 | 0.3158 |
| 0.003 | 45.0 | 630 | 0.3421 | 0.3181 |
| 0.0027 | 46.0 | 644 | 0.3410 | 0.3203 |
| 0.0037 | 47.0 | 658 | 0.3409 | 0.3205 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gossminn/predict-perception-bertino-cause-none | gossminn | 2022-07-14T14:26:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T14:22:28Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-cause-none
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-cause-none
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1988
- R2: 0.4467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.56 | 1.0 | 14 | 0.3460 | 0.0372 |
| 0.3752 | 2.0 | 28 | 0.3082 | 0.1423 |
| 0.147 | 3.0 | 42 | 0.2299 | 0.3603 |
| 0.0961 | 4.0 | 56 | 0.3254 | 0.0944 |
| 0.0859 | 5.0 | 70 | 0.2650 | 0.2625 |
| 0.0735 | 6.0 | 84 | 0.2430 | 0.3237 |
| 0.042 | 7.0 | 98 | 0.2567 | 0.2856 |
| 0.0328 | 8.0 | 112 | 0.2092 | 0.4180 |
| 0.028 | 9.0 | 126 | 0.2262 | 0.3706 |
| 0.0237 | 10.0 | 140 | 0.2170 | 0.3960 |
| 0.0235 | 11.0 | 154 | 0.2137 | 0.4054 |
| 0.0195 | 12.0 | 168 | 0.2009 | 0.4409 |
| 0.0217 | 13.0 | 182 | 0.2001 | 0.4431 |
| 0.0176 | 14.0 | 196 | 0.2123 | 0.4091 |
| 0.0226 | 15.0 | 210 | 0.2076 | 0.4224 |
| 0.019 | 16.0 | 224 | 0.1920 | 0.4657 |
| 0.0122 | 17.0 | 238 | 0.2301 | 0.3598 |
| 0.0121 | 18.0 | 252 | 0.2092 | 0.4178 |
| 0.0112 | 19.0 | 266 | 0.2038 | 0.4329 |
| 0.0081 | 20.0 | 280 | 0.2008 | 0.4411 |
| 0.0079 | 21.0 | 294 | 0.1930 | 0.4631 |
| 0.0083 | 22.0 | 308 | 0.2076 | 0.4222 |
| 0.0061 | 23.0 | 322 | 0.2036 | 0.4334 |
| 0.0057 | 24.0 | 336 | 0.1986 | 0.4472 |
| 0.0059 | 25.0 | 350 | 0.2079 | 0.4215 |
| 0.0082 | 26.0 | 364 | 0.2125 | 0.4087 |
| 0.0093 | 27.0 | 378 | 0.2096 | 0.4168 |
| 0.0061 | 28.0 | 392 | 0.2129 | 0.4076 |
| 0.005 | 29.0 | 406 | 0.2054 | 0.4284 |
| 0.0058 | 30.0 | 420 | 0.2024 | 0.4368 |
| 0.006 | 31.0 | 434 | 0.1999 | 0.4437 |
| 0.0047 | 32.0 | 448 | 0.1917 | 0.4666 |
| 0.0046 | 33.0 | 462 | 0.2000 | 0.4435 |
| 0.005 | 34.0 | 476 | 0.2003 | 0.4425 |
| 0.0041 | 35.0 | 490 | 0.2057 | 0.4276 |
| 0.0037 | 36.0 | 504 | 0.1985 | 0.4476 |
| 0.0049 | 37.0 | 518 | 0.2029 | 0.4353 |
| 0.0031 | 38.0 | 532 | 0.1963 | 0.4539 |
| 0.0031 | 39.0 | 546 | 0.1957 | 0.4554 |
| 0.0031 | 40.0 | 560 | 0.1962 | 0.4540 |
| 0.0029 | 41.0 | 574 | 0.2000 | 0.4433 |
| 0.0028 | 42.0 | 588 | 0.1986 | 0.4473 |
| 0.0035 | 43.0 | 602 | 0.1972 | 0.4514 |
| 0.0029 | 44.0 | 616 | 0.1984 | 0.4479 |
| 0.0036 | 45.0 | 630 | 0.2005 | 0.4422 |
| 0.0033 | 46.0 | 644 | 0.1994 | 0.4452 |
| 0.0029 | 47.0 | 658 | 0.1988 | 0.4467 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gossminn/predict-perception-bertino-cause-concept | gossminn | 2022-07-14T14:22:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T14:15:23Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-cause-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-cause-concept
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2035
- R2: -0.3662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3498 | 1.0 | 14 | 0.1845 | -0.2382 |
| 0.2442 | 2.0 | 28 | 0.1575 | -0.0573 |
| 0.1553 | 3.0 | 42 | 0.2216 | -0.4872 |
| 0.0726 | 4.0 | 56 | 0.1972 | -0.3234 |
| 0.0564 | 5.0 | 70 | 0.2832 | -0.9009 |
| 0.0525 | 6.0 | 84 | 0.1854 | -0.2444 |
| 0.0385 | 7.0 | 98 | 0.2816 | -0.8900 |
| 0.0257 | 8.0 | 112 | 0.1815 | -0.2183 |
| 0.03 | 9.0 | 126 | 0.3065 | -1.0576 |
| 0.0275 | 10.0 | 140 | 0.1991 | -0.3367 |
| 0.0175 | 11.0 | 154 | 0.2400 | -0.6110 |
| 0.017 | 12.0 | 168 | 0.1915 | -0.2856 |
| 0.0158 | 13.0 | 182 | 0.2008 | -0.3477 |
| 0.0127 | 14.0 | 196 | 0.1932 | -0.2968 |
| 0.009 | 15.0 | 210 | 0.2500 | -0.6783 |
| 0.0078 | 16.0 | 224 | 0.1969 | -0.3215 |
| 0.0075 | 17.0 | 238 | 0.1857 | -0.2463 |
| 0.0079 | 18.0 | 252 | 0.2405 | -0.6145 |
| 0.0089 | 19.0 | 266 | 0.1865 | -0.2517 |
| 0.0082 | 20.0 | 280 | 0.2275 | -0.5267 |
| 0.0078 | 21.0 | 294 | 0.1890 | -0.2687 |
| 0.0072 | 22.0 | 308 | 0.2230 | -0.4965 |
| 0.0064 | 23.0 | 322 | 0.2286 | -0.5346 |
| 0.0052 | 24.0 | 336 | 0.2154 | -0.4457 |
| 0.0049 | 25.0 | 350 | 0.1901 | -0.2757 |
| 0.0062 | 26.0 | 364 | 0.1917 | -0.2870 |
| 0.0043 | 27.0 | 378 | 0.2042 | -0.3704 |
| 0.0038 | 28.0 | 392 | 0.2251 | -0.5110 |
| 0.0049 | 29.0 | 406 | 0.2092 | -0.4040 |
| 0.0044 | 30.0 | 420 | 0.2119 | -0.4221 |
| 0.0041 | 31.0 | 434 | 0.2018 | -0.3542 |
| 0.0039 | 32.0 | 448 | 0.1875 | -0.2586 |
| 0.0038 | 33.0 | 462 | 0.1980 | -0.3291 |
| 0.0038 | 34.0 | 476 | 0.2071 | -0.3903 |
| 0.0043 | 35.0 | 490 | 0.1998 | -0.3412 |
| 0.0043 | 36.0 | 504 | 0.2052 | -0.3771 |
| 0.004 | 37.0 | 518 | 0.2143 | -0.4382 |
| 0.004 | 38.0 | 532 | 0.1977 | -0.3273 |
| 0.0039 | 39.0 | 546 | 0.2002 | -0.3439 |
| 0.0034 | 40.0 | 560 | 0.2035 | -0.3659 |
| 0.0036 | 41.0 | 574 | 0.1994 | -0.3387 |
| 0.0029 | 42.0 | 588 | 0.2036 | -0.3667 |
| 0.0032 | 43.0 | 602 | 0.2055 | -0.3797 |
| 0.0029 | 44.0 | 616 | 0.2025 | -0.3593 |
| 0.0027 | 45.0 | 630 | 0.2047 | -0.3743 |
| 0.0033 | 46.0 | 644 | 0.2067 | -0.3877 |
| 0.0027 | 47.0 | 658 | 0.2035 | -0.3662 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
gossminn/predict-perception-bertino-cause-object | gossminn | 2022-07-14T14:14:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-14T14:06:37Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-bertino-cause-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-cause-object
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0766
- R2: 0.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6807 | 1.0 | 14 | 0.4011 | 0.0652 |
| 0.3529 | 2.0 | 28 | 0.2304 | 0.4631 |
| 0.1539 | 3.0 | 42 | 0.0596 | 0.8611 |
| 0.0853 | 4.0 | 56 | 0.1600 | 0.6272 |
| 0.066 | 5.0 | 70 | 0.1596 | 0.6280 |
| 0.0563 | 6.0 | 84 | 0.1146 | 0.7330 |
| 0.0777 | 7.0 | 98 | 0.1010 | 0.7646 |
| 0.0299 | 8.0 | 112 | 0.0897 | 0.7910 |
| 0.0311 | 9.0 | 126 | 0.0832 | 0.8061 |
| 0.0274 | 10.0 | 140 | 0.0988 | 0.7697 |
| 0.0262 | 11.0 | 154 | 0.1048 | 0.7557 |
| 0.0204 | 12.0 | 168 | 0.0615 | 0.8566 |
| 0.0254 | 13.0 | 182 | 0.0742 | 0.8270 |
| 0.0251 | 14.0 | 196 | 0.0923 | 0.7850 |
| 0.0149 | 15.0 | 210 | 0.0663 | 0.8456 |
| 0.0141 | 16.0 | 224 | 0.0755 | 0.8241 |
| 0.0112 | 17.0 | 238 | 0.0905 | 0.7891 |
| 0.0108 | 18.0 | 252 | 0.0834 | 0.8057 |
| 0.0096 | 19.0 | 266 | 0.0823 | 0.8082 |
| 0.0073 | 20.0 | 280 | 0.0825 | 0.8078 |
| 0.0092 | 21.0 | 294 | 0.0869 | 0.7974 |
| 0.0075 | 22.0 | 308 | 0.0744 | 0.8266 |
| 0.0075 | 23.0 | 322 | 0.0825 | 0.8078 |
| 0.0062 | 24.0 | 336 | 0.0797 | 0.8144 |
| 0.0065 | 25.0 | 350 | 0.0793 | 0.8152 |
| 0.007 | 26.0 | 364 | 0.0840 | 0.8043 |
| 0.0067 | 27.0 | 378 | 0.0964 | 0.7753 |
| 0.0064 | 28.0 | 392 | 0.0869 | 0.7976 |
| 0.0063 | 29.0 | 406 | 0.0766 | 0.8215 |
| 0.0057 | 30.0 | 420 | 0.0764 | 0.8219 |
| 0.0057 | 31.0 | 434 | 0.0796 | 0.8145 |
| 0.0054 | 32.0 | 448 | 0.0853 | 0.8012 |
| 0.0044 | 33.0 | 462 | 0.0750 | 0.8253 |
| 0.0072 | 34.0 | 476 | 0.0782 | 0.8179 |
| 0.006 | 35.0 | 490 | 0.0867 | 0.7979 |
| 0.0054 | 36.0 | 504 | 0.0819 | 0.8092 |
| 0.0047 | 37.0 | 518 | 0.0839 | 0.8045 |
| 0.0043 | 38.0 | 532 | 0.0764 | 0.8221 |
| 0.0039 | 39.0 | 546 | 0.0728 | 0.8303 |
| 0.0041 | 40.0 | 560 | 0.0755 | 0.8241 |
| 0.0038 | 41.0 | 574 | 0.0729 | 0.8301 |
| 0.0034 | 42.0 | 588 | 0.0781 | 0.8180 |
| 0.0038 | 43.0 | 602 | 0.0762 | 0.8224 |
| 0.0032 | 44.0 | 616 | 0.0777 | 0.8189 |
| 0.0035 | 45.0 | 630 | 0.0776 | 0.8191 |
| 0.0037 | 46.0 | 644 | 0.0765 | 0.8217 |
| 0.0036 | 47.0 | 658 | 0.0766 | 0.8216 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ericklerouge123/xlm-roberta-base-finetuned-panx-de | ericklerouge123 | 2022-07-14T14:05:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-17T20:42:35Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
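In the meantime, the snippet below is a minimal inference sketch (the example sentence is illustrative; entity labels follow the PAN-X/WikiANN annotation scheme):

```python
from transformers import pipeline

# Aggregate sub-word pieces into whole entity spans
ner = pipeline(
    "token-classification",
    model="ericklerouge123/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```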
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Team-PIXEL/pixel-base-finetuned-squadv1 | Team-PIXEL | 2022-07-14T13:05:00Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-14T13:00:33Z | ---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: pixel-base-finetuned-squadv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-squadv1
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
Team-PIXEL/pixel-base-finetuned-tydiqa-goldp | Team-PIXEL | 2022-07-14T12:54:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-14T12:35:12Z | ---
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: pixel-base-finetuned-tydiqa-goldp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pixel-base-finetuned-tydiqa-goldp
This model is a fine-tuned version of [Team-PIXEL/pixel-base](https://huggingface.co/Team-PIXEL/pixel-base) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 20000
- mixed_precision_training: Apex, opt level O1
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.12.1
|
jgriffi/bart_abstract_summarization | jgriffi | 2022-07-14T12:28:07Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-14T09:13:23Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart_abstract_summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_abstract_summarization
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1852
## Model description
More information needed
## Intended uses & limitations
More information needed
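Pending a fuller description, here is a minimal summarization sketch (the input abstract and generation lengths are illustrative assumptions):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="jgriffi/bart_abstract_summarization")

# Illustrative abstract; replace with your own text
abstract = (
    "We study transfer learning for abstractive summarization and show that "
    "fine-tuning a large pre-trained sequence-to-sequence model on a small "
    "in-domain corpus yields substantial gains over training from scratch."
)
print(summarizer(abstract, max_length=60, min_length=10, do_sample=False))
```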
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0559 | 0.25 | 500 | 0.1601 |
| 0.0068 | 0.49 | 1000 | 0.2571 |
| 0.0016 | 0.74 | 1500 | 0.4330 |
| 0.0001 | 0.99 | 2000 | 0.1852 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
stokic/ppo-LunarLander-v2 | stokic | 2022-07-14T12:22:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-14T12:21:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 109.33 +/- 78.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
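In the meantime, a minimal evaluation sketch is given below; the checkpoint filename inside the repository is an assumption and may need to be adjusted:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="stokic/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```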
|
Lvxue/finetuned-mt5-base-10epoch | Lvxue | 2022-07-14T12:21:17Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-12T03:18:31Z | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
model-index:
- name: finetuned-mt5-base-10epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mt5-base-10epoch
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vortixhead/distilbert-base-uncased-finetuned-emotion | vortixhead | 2022-07-14T12:00:08Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-02T16:55:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240758723346115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2140
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
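Pending a fuller description, a minimal inference sketch is shown below (the example sentence is illustrative; the returned label names depend on the model's `id2label` configuration):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vortixhead/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy this finally works!"))
```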
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8278 | 1.0 | 250 | 0.3099 | 0.9055 | 0.9032 |
| 0.251 | 2.0 | 500 | 0.2140 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Siyong/MC | Siyong | 2022-07-14T10:48:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-14T08:44:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec-base-All
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec-base-All
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0545
- Wer: 0.8861
- Cer: 0.5014
## Model description
More information needed
## Intended uses & limitations
More information needed
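Pending a fuller description, a minimal transcription sketch is shown below (the audio path is a placeholder for a 16 kHz mono recording):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Siyong/MC")
# "sample.wav" is a placeholder input file
print(asr("sample.wav"))
```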
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| No log | 3.33 | 500 | 4.0654 | 1.0 | 0.9823 |
| No log | 6.67 | 1000 | 3.4532 | 1.0 | 0.9823 |
| No log | 10.0 | 1500 | 3.0707 | 0.9992 | 0.9781 |
| No log | 13.33 | 2000 | 2.7335 | 1.0017 | 0.9027 |
| No log | 16.67 | 2500 | 2.5896 | 1.0690 | 0.7302 |
| No log | 20.0 | 3000 | 2.3315 | 1.0690 | 0.6677 |
| No log | 23.33 | 3500 | 2.2217 | 1.0150 | 0.5966 |
| No log | 26.67 | 4000 | 2.3802 | 1.0549 | 0.5948 |
| No log | 30.0 | 4500 | 2.2208 | 0.9975 | 0.5681 |
| 2.4224 | 33.33 | 5000 | 2.2687 | 0.9800 | 0.5537 |
| 2.4224 | 36.67 | 5500 | 2.3169 | 0.9476 | 0.5493 |
| 2.4224 | 40.0 | 6000 | 2.5196 | 0.9900 | 0.5509 |
| 2.4224 | 43.33 | 6500 | 2.4816 | 0.9501 | 0.5272 |
| 2.4224 | 46.67 | 7000 | 2.4894 | 0.9485 | 0.5276 |
| 2.4224 | 50.0 | 7500 | 2.4555 | 0.9418 | 0.5305 |
| 2.4224 | 53.33 | 8000 | 2.7326 | 0.9559 | 0.5255 |
| 2.4224 | 56.67 | 8500 | 2.5514 | 0.9227 | 0.5209 |
| 2.4224 | 60.0 | 9000 | 2.9135 | 0.9717 | 0.5455 |
| 2.4224 | 63.33 | 9500 | 3.0465 | 0.8346 | 0.5002 |
| 0.8569 | 66.67 | 10000 | 2.8177 | 0.9302 | 0.5216 |
| 0.8569 | 70.0 | 10500 | 2.9908 | 0.9310 | 0.5128 |
| 0.8569 | 73.33 | 11000 | 3.1752 | 0.9235 | 0.5284 |
| 0.8569 | 76.67 | 11500 | 2.7412 | 0.8886 | 0.5 |
| 0.8569 | 80.0 | 12000 | 2.7362 | 0.9127 | 0.5040 |
| 0.8569 | 83.33 | 12500 | 2.9636 | 0.9152 | 0.5093 |
| 0.8569 | 86.67 | 13000 | 3.0139 | 0.9011 | 0.5097 |
| 0.8569 | 90.0 | 13500 | 2.8325 | 0.8853 | 0.5032 |
| 0.8569 | 93.33 | 14000 | 3.0383 | 0.8845 | 0.5056 |
| 0.8569 | 96.67 | 14500 | 2.7931 | 0.8795 | 0.4965 |
| 0.3881 | 100.0 | 15000 | 2.8972 | 0.8928 | 0.5012 |
| 0.3881 | 103.33 | 15500 | 2.7780 | 0.8736 | 0.4947 |
| 0.3881 | 106.67 | 16000 | 3.1081 | 0.9036 | 0.5109 |
| 0.3881 | 110.0 | 16500 | 3.0078 | 0.8928 | 0.5032 |
| 0.3881 | 113.33 | 17000 | 3.0245 | 0.8886 | 0.5009 |
| 0.3881 | 116.67 | 17500 | 3.0739 | 0.8928 | 0.5065 |
| 0.3881 | 120.0 | 18000 | 3.0545 | 0.8861 | 0.5014 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
google/tapas-mini-finetuned-wtq | google | 2022-07-14T10:14:00Z | 365 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS mini model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_mini_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_mini` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
SMALL | noreset | 0.3681 | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
SMALL | reset | 0.3762 | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
**MINI** | **noreset** | **0.2783** | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
**MINI** | **reset** | **0.2854** | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
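As a quick start, a minimal sketch with the `table-question-answering` pipeline is shown below (the table and question are illustrative; cell values must be strings):

```python
from transformers import pipeline
import pandas as pd

table_qa = pipeline("table-question-answering", model="google/tapas-mini-finetuned-wtq")

table = pd.DataFrame(
    {
        "Repository": ["Transformers", "Datasets", "Tokenizers"],
        "Stars": ["36542", "4512", "3934"],
    }
)
print(table_qa(table=table, query="How many stars does the Transformers repository have?"))
```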
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
google/tapas-small-finetuned-wtq | google | 2022-07-14T10:13:43Z | 291 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"tapas",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2004.02349",
"arxiv:2010.00571",
"arxiv:1508.00305",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- tapas
- table-question-answering
license: apache-2.0
datasets:
- wikitablequestions
---
# TAPAS small model fine-tuned on WikiTable Questions (WTQ)
This model has 2 versions which can be used. The default version corresponds to the `tapas_wtq_wikisql_sqa_inter_masklm_small_reset` checkpoint of the [original Github repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned in a chain on [SQA](https://www.microsoft.com/en-us/download/details.aspx?id=54253), [WikiSQL](https://github.com/salesforce/WikiSQL) and finally [WTQ](https://github.com/ppasupat/WikiTableQuestions). It uses relative position embeddings (i.e. resetting the position index at every cell of the table).
The other (non-default) version which can be used is:
- `no_reset`, which corresponds to `tapas_wtq_wikisql_sqa_inter_masklm_small` (intermediate pre-training, absolute position embeddings).
Disclaimer: The team releasing TAPAS did not write a model card for this model so this model card has been written by
the Hugging Face team and contributors.
## Results
Size | Reset | Dev Accuracy | Link
-------- | --------| -------- | ----
LARGE | noreset | 0.5062 | [tapas-large-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/no_reset)
LARGE | reset | 0.5097 | [tapas-large-finetuned-wtq](https://huggingface.co/google/tapas-large-finetuned-wtq/tree/main)
BASE | noreset | 0.4525 | [tapas-base-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/no_reset)
BASE | reset | 0.4638 | [tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq/tree/main)
MEDIUM | noreset | 0.4324 | [tapas-medium-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/no_reset)
MEDIUM | reset | 0.4324 | [tapas-medium-finetuned-wtq](https://huggingface.co/google/tapas-medium-finetuned-wtq/tree/main)
**SMALL** | **noreset** | **0.3681** | [tapas-small-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/no_reset)
**SMALL** | **reset** | **0.3762** | [tapas-small-finetuned-wtq](https://huggingface.co/google/tapas-small-finetuned-wtq/tree/main)
MINI | noreset | 0.2783 | [tapas-mini-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/no_reset)
MINI | reset | 0.2854 | [tapas-mini-finetuned-wtq](https://huggingface.co/google/tapas-mini-finetuned-wtq/tree/main)
TINY | noreset | 0.0823 | [tapas-tiny-finetuned-wtq (with absolute pos embeddings)](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/no_reset)
TINY | reset | 0.1039 | [tapas-tiny-finetuned-wtq](https://huggingface.co/google/tapas-tiny-finetuned-wtq/tree/main)
## Model description
TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion.
This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it
can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in
the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words.
This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other,
or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional
representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating
a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence
is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.
This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used
to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed
or refuted by the contents of a table. Fine-tuning is done by adding a cell selection head and aggregation head on top of the pre-trained model, and then jointly training these randomly initialized classification heads with the base model on SQA, WikiSQL and finally WTQ.
## Intended uses & limitations
You can use this model for answering questions related to a table.
For code examples, we refer to the documentation of TAPAS on the HuggingFace website.
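As a quick reference, a minimal sketch using the model classes directly is shown below (the table and question are illustrative; cell values must be strings):

```python
from transformers import TapasTokenizer, TapasForQuestionAnswering
import pandas as pd

model_name = "google/tapas-small-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

table = pd.DataFrame({"City": ["Paris", "London"], "Population": ["2148000", "8982000"]})
queries = ["What is the population of Paris?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Convert logits to predicted cell coordinates and aggregation operators
coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coords, agg_indices)
```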
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Question [SEP] Flattened table [SEP]
```
The authors first converted the WTQ dataset into the SQA format using automatic conversion scripts.
### Fine-tuning
The model was fine-tuned on 32 Cloud TPU v3 cores for 50,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 10 hours. The optimizer used is Adam with a learning rate of 1.93581e-5, and a warmup
ratio of 0.128960. An inductive bias is added such that the model only selects cells of the same column. This is reflected by the
`select_one_column` parameter of `TapasConfig`. See the [paper](https://arxiv.org/abs/2004.02349) for more details (tables 11 and
12).
### BibTeX entry and citation info
```bibtex
@misc{herzig2020tapas,
title={TAPAS: Weakly Supervised Table Parsing via Pre-training},
author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos},
year={2020},
eprint={2004.02349},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
```bibtex
@misc{eisenschlos2020understanding,
title={Understanding tables with intermediate pre-training},
author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller},
year={2020},
eprint={2010.00571},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@article{DBLP:journals/corr/PasupatL15,
author = {Panupong Pasupat and
Percy Liang},
title = {Compositional Semantic Parsing on Semi-Structured Tables},
journal = {CoRR},
volume = {abs/1508.00305},
year = {2015},
url = {http://arxiv.org/abs/1508.00305},
archivePrefix = {arXiv},
eprint = {1508.00305},
timestamp = {Mon, 13 Aug 2018 16:47:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/PasupatL15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
microsoft/tapex-large-finetuned-tabfact | microsoft | 2022-07-14T10:10:10Z | 136 | 8 | transformers | [
"transformers",
"pytorch",
"bart",
"text-classification",
"tapex",
"table-question-answering",
"en",
"dataset:tab_fact",
"arxiv:2107.07653",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | table-question-answering | 2022-03-02T23:29:05Z | ---
language: en
tags:
- tapex
- table-question-answering
datasets:
- tab_fact
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
This model is the `tapex-large` model fine-tuned on the [Tabfact](https://huggingface.co/datasets/tab_fact) dataset.
## Intended Uses
You can use the model for table fact verification.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForSequenceClassification
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
model = BartForSequenceClassification.from_pretrained("microsoft/tapex-large-finetuned-tabfact")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "beijing hosts the olympic games in 2012"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model(**encoding)
output_id = int(outputs.logits[0].argmax(dim=0))
print(model.config.id2label[output_id])
# Refused
```
### How to Eval
Please find the eval script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
ClassCat/roberta-base-spanish | ClassCat | 2022-07-14T09:38:05Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"es",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-25T20:07:43Z | ---
language: es
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: "Yo vivo en <mask>."
- text: "Quiero <mask> contigo ?"
- text: "Es clima es <mask>."
- text: "Me llamo <mask>."
- text: "Las negociaciones están <mask>."
---
## RoBERTa Spanish base model (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses RoBERTa base settings except for the vocabulary size.
### Tokenizer
Using a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* [wiki40b/es](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bes) (Spanish Wikipedia)
* Subset of [CC-100/es](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-spanish')
unmasker("Yo soy <mask>.")
``` |
Kuro96/ppo-LunarLander-v2 | Kuro96 | 2022-07-14T09:20:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-14T09:20:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 222.42 +/- 18.29
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
NinaXiao/distilroberta-base-wiki-mark | NinaXiao | 2022-07-14T09:05:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-14T08:42:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-wiki-mark
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wiki-mark
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2841 | 1.0 | 1265 | 2.0553 |
| 2.1536 | 2.0 | 2530 | 1.9840 |
| 2.1067 | 3.0 | 3795 | 1.9731 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
thannarot/hug-clip-bid | thannarot | 2022-07-14T08:07:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2022-07-13T14:59:12Z | ---
tags:
- generated_from_trainer
model-index:
- name: hug-clip-bid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hug-clip-bid
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0263 | 0.15 | 100 | 1.3193 |
| 0.9187 | 0.29 | 200 | 1.0286 |
| 0.7005 | 0.44 | 300 | 0.9560 |
| 0.5851 | 0.58 | 400 | 0.9433 |
| 0.6122 | 0.73 | 500 | 0.8936 |
| 0.5916 | 0.88 | 600 | 0.8276 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.11.6
|
vasugoel/K-12BERT | vasugoel | 2022-07-14T07:54:54Z | 42 | 9 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"education",
"K-12",
"en",
"dataset:vasugoel/K-12Corpus",
"arxiv:2205.12335",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-05T11:37:01Z | ---
language: en
tags:
- education
- K-12
license: apache-2.0
datasets:
- vasugoel/K-12Corpus
---
## K-12BERT model
K-12BERT is a model trained by performing continued pretraining on the K-12Corpus. Since the performance of BERT-like models on domain-adaptive tasks has shown great progress, we noticed the lack of such a model for the education domain (especially K-12 education). To that end we present K-12BERT, a BERT-based model trained on our custom curated dataset, extracted from both open and proprietary education resources.
The model was trained using an MLM objective in a continued pretraining fashion, due to the lack of resources available to train the model from the ground up. This also allowed us to save a lot of computational resources and utilize the existing knowledge of BERT. To that end we also preserve the original vocabulary of BERT, to evaluate performance under those conditions.
## Intended uses
We hope that the community, especially researchers and professionals engaged in the education domain, is able to utilize this model to advance the domain of AI in education. With manifold uses for online education platforms, we hope we can contribute towards advancing education resources for the upcoming generation.
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel, AutoTokenizer, AutoModelForMaskedLM
tokenizer = BertTokenizer.from_pretrained('vasugoel/K-12BERT') # AutoTokenizer.from_pretrained('vasugoel/K-12BERT')
model = BertModel.from_pretrained("vasugoel/K-12BERT") # AutoModelForMaskedLM.from_pretrained('vasugoel/K-12BERT')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
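Since the model was trained with an MLM objective, it can also be queried directly through the fill-mask pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vasugoel/K-12BERT")
print(unmasker("Photosynthesis converts sunlight into chemical [MASK]."))
```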
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2205.12335,
doi = {10.48550/ARXIV.2205.12335},
url = {https://arxiv.org/abs/2205.12335},
author = {Goel, Vasu and Sahnan, Dhruv and V, Venktesh and Sharma, Gaurav and Dwivedi, Deep and Mohania, Mukesh},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {K-12BERT: BERT for K-12 education},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` |
shivaniNK8/mt5-small-finetuned-amazon-en-es | shivaniNK8 | 2022-07-14T06:39:22Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-07-14T05:17:52Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 22.6804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4413
- Rouge1: 22.6804
- Rouge2: 8.3299
- Rougel: 17.9992
- Rougelsum: 20.7342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.77 | 1.0 | 240 | 2.7230 | 17.25 | 5.629 | 14.0381 | 15.8959 |
| 3.7586 | 2.0 | 480 | 2.5949 | 19.4577 | 6.9354 | 15.772 | 17.8773 |
| 3.4314 | 3.0 | 720 | 2.5355 | 20.0511 | 7.6417 | 16.0889 | 18.4551 |
| 3.2892 | 4.0 | 960 | 2.4845 | 20.3951 | 7.88 | 16.601 | 19.0048 |
| 3.1954 | 5.0 | 1200 | 2.4612 | 20.1806 | 7.2656 | 16.2658 | 18.6222 |
| 3.1128 | 6.0 | 1440 | 2.4544 | 22.5647 | 8.0899 | 17.8057 | 20.487 |
| 3.103 | 7.0 | 1680 | 2.4498 | 22.7048 | 8.384 | 17.978 | 20.6871 |
| 3.0708 | 8.0 | 1920 | 2.4413 | 22.6804 | 8.3299 | 17.9992 | 20.7342 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
userGagan/segformer-b0-finetuned-segments-sidewalk-2 | userGagan | 2022-07-14T06:33:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2022-07-05T20:02:14Z | ---
license: apache-2.0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the userGagan/ResizedSample dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3429
- Mean Iou: 0.8143
- Mean Accuracy: 0.9007
- Overall Accuracy: 0.9061
- Per Category Iou: [0.8822819675417668, 0.7774253195321242, 0.7832033563111727]
- Per Category Accuracy: [0.9319684170082266, 0.8657193844491432, 0.9044945609610779]
## Model description
More information needed
## Intended uses & limitations
More information needed
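Pending a fuller description, a minimal inference sketch is shown below (the input image path is a placeholder; if the repository lacks a preprocessor config, the feature extractor can be loaded from `nvidia/mit-b0` instead):

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

repo = "userGagan/segformer-b0-finetuned-segments-sidewalk-2"
feature_extractor = SegformerFeatureExtractor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Logits have shape (batch, num_labels, height/4, width/4); take the per-pixel argmax
segmentation = outputs.logits.argmax(dim=1)
print(segmentation.shape)
```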
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------:|:------------------------------------------------------------:|
| 0.7949 | 0.5 | 20 | 0.8960 | 0.7129 | 0.8533 | 0.8427 | [0.7978191889735743, 0.6994730230171242, 0.6413103816527537] | [0.826874349660607, 0.8237981626592454, 0.9091007880329902] |
| 0.4881 | 1.0 | 40 | 0.6195 | 0.7364 | 0.8610 | 0.8552 | [0.8041892620489134, 0.6981663805103046, 0.7069887055480671] | [0.8308827565320059, 0.887905283397269, 0.8642919506720577] |
| 0.3115 | 1.5 | 60 | 0.4767 | 0.7352 | 0.8536 | 0.8588 | [0.8276338695141907, 0.7016825436162023, 0.6763414045904438] | [0.8633649830215921, 0.8776778472775076, 0.8196451790592317] |
| 0.5863 | 2.0 | 80 | 0.4895 | 0.7543 | 0.8748 | 0.8668 | [0.8156517914197925, 0.7259786638902507, 0.7213518497027839] | [0.8402281798360435, 0.8932153836673491, 0.8909222571543128] |
| 0.5182 | 2.5 | 100 | 0.4058 | 0.7904 | 0.8866 | 0.8919 | [0.860991170688589, 0.7583876635226005, 0.7518265397248736] | [0.9088903949664655, 0.8761789935147187, 0.8746304338865427] |
| 0.4755 | 3.0 | 120 | 0.3683 | 0.7896 | 0.8861 | 0.8895 | [0.8547537413009911, 0.7465075384127533, 0.7674680941571024] | [0.8979683913158062, 0.8865259395690547, 0.8738060532025316] |
| 0.6616 | 3.5 | 140 | 0.3697 | 0.7915 | 0.8874 | 0.8898 | [0.8551700094228354, 0.7431970428539307, 0.7761922571371438] | [0.8899387313627766, 0.903193218309171, 0.8690639906770039] |
| 0.5087 | 4.0 | 160 | 0.3367 | 0.8061 | 0.8987 | 0.8987 | [0.8640367246398447, 0.7643869962764198, 0.7899951558528526] | [0.9012200396208266, 0.8918889478830869, 0.902900133774502] |
| 0.5478 | 4.5 | 180 | 0.3297 | 0.8131 | 0.8991 | 0.9040 | [0.8775309087721331, 0.7692790103652185, 0.792538025793261] | [0.9196387801394476, 0.8895118205906903, 0.8882327151727265] |
| 0.389 | 5.0 | 200 | 0.3429 | 0.8143 | 0.9007 | 0.9061 | [0.8822819675417668, 0.7774253195321242, 0.7832033563111727] | [0.9319684170082266, 0.8657193844491432, 0.9044945609610779] |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rajat99/Fine_Tuning_XLSR_300M_testing_4_model | rajat99 | 2022-07-14T06:15:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-14T05:50:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Fine_Tuning_XLSR_300M_testing_4_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_XLSR_300M_testing_4_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Hardik1313X/bert-finetuned-ner | Hardik1313X | 2022-07-14T04:36:44Z | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-14T04:19:47Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hardik1313X/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hardik1313X/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0279
- Validation Loss: 0.0571
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1745 | 0.0630 | 0 |
| 0.0468 | 0.0578 | 1 |
| 0.0279 | 0.0571 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
andrewzhang505/quad-swarm-rl-sf2 | andrewzhang505 | 2022-07-14T03:55:17Z | 2 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"region:us"
] | reinforcement-learning | 2022-07-14T03:55:10Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
---
An **APPO** model trained on the **quadrotor_multi** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
amir36/distilbert-base-uncased-finetuned-emotion | amir36 | 2022-07-14T02:52:28Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-13T05:57:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.920970510317642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.921
- F1: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8133 | 1.0 | 250 | 0.3078 | 0.9095 | 0.9076 |
| 0.2431 | 2.0 | 500 | 0.2180 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
kuttersn/gpt2-finetuned-redditComments | kuttersn | 2022-07-14T01:38:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-07T14:15:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-redditComments
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-redditComments
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8418
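Since this is a causal language model trained with a cross-entropy objective, the evaluation loss implies a perplexity of roughly exp(3.8418) ≈ 46.6 (assuming the loss is reported in nats):

```python
import math

# Perplexity implied by the reported evaluation loss
print(math.exp(3.8418))  # ~46.6
```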
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9535 | 1.0 | 4320 | 3.8888 |
| 3.8832 | 2.0 | 8640 | 3.8523 |
| 3.8708 | 3.0 | 12960 | 3.8418 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ClassCat/roberta-base-latin-v2 | ClassCat | 2022-07-14T00:20:13Z | 162 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"la",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-01T18:45:18Z | ---
language: la
license: cc-by-sa-4.0
datasets:
- cc100
widget:
- text: quod est tibi <mask> ?"
- text: vita brevis, ars <mask>.
- text: errare <mask> est.
- text: usus est magister <mask>.
---
## RoBERTa Latin base model Version 2 (Uncased)
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the RoBERTa base settings except for the vocabulary size.
### Tokenizer
Using a BPE tokenizer with a vocabulary size of 50,000.
### Training Data
* Subset of [CC-100/la](https://data.statmt.org/cc-100/): Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='ClassCat/roberta-base-latin-v2')
unmasker("vita brevis, ars <mask>")
``` |
joaoalvarenga/bloom-8bit | joaoalvarenga | 2022-07-14T00:12:48Z | 26 | 75 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"arxiv:2106.09685",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-07-11T11:06:46Z | ---
inference: false
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
pipeline_tag: text-generation
---
### Quantized bigscience/bloom with 8-bit weights
Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom), a ~176-billion-parameter language model, that you can run and fine-tune with less memory.
Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size. The original version takes \~353GB of memory; this version takes **\~180GB**.
Our main goal is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster.
### How to fine-tune
In this [notebook](https://nbviewer.org/urls/huggingface.co/joaoalvarenga/bloom-8bit/raw/main/fine-tuning-example.ipynb) you can find an adaptation from [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) to fine-tune Bloom 8-bit with a 3x NVIDIA A100 instance.
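As a rough illustration of how such adapters can be attached, the sketch below adds zero-initialized low-rank (LoRA-style) adapters to every frozen 8-bit linear layer. It assumes the `FrozenBNBLinear` class defined in the next section, and the rank is an arbitrary choice, not the value used in the notebook:

```python
import torch.nn as nn

def add_lora_adapters(model, rank=8):
    # Attach a trainable low-rank adapter to each frozen 8-bit linear layer;
    # the int8 weights stay frozen and only the adapter parameters get gradients.
    for module in model.modules():
        if isinstance(module, FrozenBNBLinear):  # defined in the code below
            adapter = nn.Sequential(
                nn.Linear(module.in_features, rank, bias=False),
                nn.Linear(rank, module.out_features, bias=False),
            )
            # Zero-init the up-projection so training starts from the unmodified model.
            nn.init.zeros_(adapter[1].weight)
            module.adapter = adapter
```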
### How to use
This model can be used by adapting Bloom's original implementation. This is an adaptation from [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb):
```python
import transformers
from transformers import BloomTokenizerFast
import torch
import torch.nn as nn
import torch.nn.functional as F
from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise
from typing import Tuple
from torch.cuda.amp import custom_fwd, custom_bwd
class FrozenBNBLinear(nn.Module):
def __init__(self, weight, absmax, code, bias=None):
assert isinstance(bias, nn.Parameter) or bias is None
super().__init__()
self.out_features, self.in_features = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
self.bias = bias
def forward(self, input):
output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear":
weights_int8, state = quantize_blockise_lowmemory(linear.weight)
return cls(weights_int8, *state, linear.bias)
def __repr__(self):
return f"{self.__class__.__name__}({self.in_features}, {self.out_features})"
class DequantizeAndLinear(torch.autograd.Function):
@staticmethod
@custom_fwd
def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor,
absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor):
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
ctx.save_for_backward(input, weights_quantized, absmax, code)
ctx._has_bias = bias is not None
return F.linear(input, weights_deq, bias)
@staticmethod
@custom_bwd
def backward(ctx, grad_output: torch.Tensor):
assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3]
input, weights_quantized, absmax, code = ctx.saved_tensors
# grad_output: [*batch, out_features]
weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code)
grad_input = grad_output @ weights_deq
grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None
return grad_input, None, None, None, grad_bias
class FrozenBNBEmbedding(nn.Module):
def __init__(self, weight, absmax, code):
super().__init__()
self.num_embeddings, self.embedding_dim = weight.shape
self.register_buffer("weight", weight.requires_grad_(False))
self.register_buffer("absmax", absmax.requires_grad_(False))
self.register_buffer("code", code.requires_grad_(False))
self.adapter = None
def forward(self, input, **kwargs):
with torch.no_grad():
            # note: both quantized weights and input indices are *not* differentiable
weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code)
output = F.embedding(input, weight_deq, **kwargs)
if self.adapter:
output += self.adapter(input)
return output
@classmethod
def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding":
weights_int8, state = quantize_blockise_lowmemory(embedding.weight)
return cls(weights_int8, *state)
def __repr__(self):
return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})"
def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20):
assert chunk_size % 4096 == 0
code = None
chunks = []
absmaxes = []
flat_tensor = matrix.view(-1)
for i in range((matrix.numel() - 1) // chunk_size + 1):
input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone()
quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code)
chunks.append(quantized_chunk)
absmaxes.append(absmax_chunk)
matrix_i8 = torch.cat(chunks).reshape_as(matrix)
absmax = torch.cat(absmaxes)
return matrix_i8, (absmax, code)
def convert_to_int8(model):
"""Convert linear and embedding modules to 8-bit with optional adapters"""
for module in list(model.modules()):
for name, child in module.named_children():
if isinstance(child, nn.Linear):
print(name, child)
setattr(
module,
name,
FrozenBNBLinear(
weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
bias=child.bias,
),
)
elif isinstance(child, nn.Embedding):
setattr(
module,
name,
FrozenBNBEmbedding(
weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8),
absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1),
code=torch.zeros(256),
)
)
class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock):
def __init__(self, config, layer_number=None):
super().__init__(config, layer_number)
convert_to_int8(self.self_attention)
convert_to_int8(self.mlp)
class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM):
def __init__(self, config):
super().__init__(config)
convert_to_int8(self)
transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock
model = BloomForCausalLM.from_pretrained('joaoalvarenga/bloom-8bit', low_cpu_mem_usage=True)
tokenizer = BloomTokenizerFast.from_pretrained('joaoalvarenga/bloom-8bit')
prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt')
out = model.generate(**prompt, min_length=10, do_sample=True)
tokenizer.decode(out[0])
```
|
benjamin/gpt2-wechsel-malagasy | benjamin | 2022-07-13T23:45:23Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"mg",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-05T12:51:58Z | ---
language: mg
license: mit
---
# gpt2-wechsel-malagasy
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
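A minimal usage sketch with the standard text-generation pipeline (the prompt is purely illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-malagasy")
generator("Antananarivo dia", max_length=30)  # illustrative Malagasy prompt
```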
## Performance
| Model | PPL |
|---|---|
| `gpt2-wechsel-sundanese` | **111.72** |
| `gpt2` (retrained from scratch) | 149.46 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-scottish-gaelic` | **16.43** |
| `gpt2` (retrained from scratch) | 19.53 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-uyghur` | **34.33** |
| `gpt2` (retrained from scratch) | 42.82 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-malagasy` | **14.01** |
| `gpt2` (retrained from scratch) | 15.93 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|