modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
AntiSquid/Reinforce-model-666 | AntiSquid | 2022-07-12T21:52:02Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-12T21:51:51Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-model-666
results:
- metrics:
- type: mean_reward
value: 117.10 +/- 4.85
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Shaier/medqa_fine_tuned_generic_bert | Shaier | 2022-07-12T20:33:17Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-07-12T19:49:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned_generic_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned_generic_bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4239
- Accuracy: 0.2869
## Model description
More information needed
## Intended uses & limitations
More information needed
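A minimal sketch of multiple-choice inference, assuming the checkpoint loads with `AutoModelForMultipleChoice` (the card's tags include `multiple-choice`); the example question and answer options are purely illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "Shaier/medqa_fine_tuned_generic_bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

question = "Which vitamin deficiency causes scurvy?"  # illustrative only
choices = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Pair the question with every choice; the model expects (batch, num_choices, seq_len).
encoding = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # add the batch dimension

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print(choices[logits.argmax(-1).item()])
```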
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3851 | 0.2594 |
| 1.3896 | 2.0 | 636 | 1.3805 | 0.2807 |
| 1.3896 | 3.0 | 954 | 1.3852 | 0.2948 |
| 1.3629 | 4.0 | 1272 | 1.3996 | 0.2980 |
| 1.3068 | 5.0 | 1590 | 1.4239 | 0.2869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
MichalRoztocki/finetuning-sentiment-model-3000-samples | MichalRoztocki | 2022-07-12T19:48:42Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-12T19:35:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3085
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
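A minimal usage sketch with the `transformers` pipeline, assuming the default label mapping produced by the Trainer (the example review is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for sentiment classification.
classifier = pipeline(
    "text-classification",
    model="MichalRoztocki/finetuning-sentiment-model-3000-samples",
)

print(classifier("This movie was surprisingly good and the acting was superb."))
```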
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ilmariky/bert-base-finnish-cased-squad1-fi | ilmariky | 2022-07-12T19:09:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"fi",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-12T17:01:30Z | ---
language: fi
datasets:
- SQuAD_v2_fi + Finnish partition of TyDi-QA
license: gpl-3.0
---
# bert-base-finnish-cased-v1 for QA
This is the [bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) model, fine-tuned using an automatically translated [Finnish version of the SQuAD2.0 dataset](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) in combination with the Finnish partition of the [TyDi-QA](https://github.com/google-research-datasets/tydiqa) dataset. It's been trained on question-answer pairs, **excluding unanswerable questions**, for the task of question answering.
Another QA model, fine-tuned with unanswerable questions included, is also available: [bert-base-finnish-cased-squad2-fi](https://huggingface.co/ilmariky/bert-base-finnish-cased-squad2-fi).
## Overview
**Language model:** bert-base-finnish-cased-v1
**Language:** Finnish
**Downstream-task:** Extractive QA
**Training data:** Answerable questions from [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA
**Eval data:** Answerable questions from [Finnish SQuAD 2.0](https://huggingface.co/datasets/ilmariky/SQuAD_v2_fi) + Finnish partition of TyDi-QA
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "ilmariky/bert-base-finnish-cased-squad1-fi"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Mikä tämä on?',
'context': 'Tämä on testi.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated with a slightly modified version of the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
{
"exact": 58.00497718788884,
"f1": 69.90891092523077,
"total": 4822,
"HasAns_exact": 58.00497718788884,
"HasAns_f1": 69.90891092523077,
"HasAns_total": 4822
}
```
|
zluvolyote/s288cExpressionPrediction_k6 | zluvolyote | 2022-07-12T16:54:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-12T16:02:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: s288cExpressionPrediction_k6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s288cExpressionPrediction_k6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4418
- Accuracy: 0.8067
- F1: 0.7882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 58 | 0.5315 | 0.7278 | 0.7572 |
| No log | 2.0 | 116 | 0.4604 | 0.7853 | 0.7841 |
| No log | 3.0 | 174 | 0.4418 | 0.8067 | 0.7882 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
reachrkr/TEST2ppo-LunarLander-v2 | reachrkr | 2022-07-12T16:20:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-12T16:20:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 266.96 +/- 25.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
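The loading code above is left as a TODO; a hedged sketch with `huggingface_sb3` and `stable-baselines3` might look like the following. The checkpoint filename is an assumption — check the repository files for the actual name.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; inspect the repo's files for the actual .zip name.
checkpoint = load_from_hub(
    repo_id="reachrkr/TEST2ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()  # with older gym (<0.26) reset() returns only the observation
action, _states = model.predict(obs, deterministic=True)
```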
|
fxmarty/20220712-h16m02s58_example_beans | fxmarty | 2022-07-12T16:03:03Z | 0 | 0 | null | [
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"region:us"
] | image-classification | 2022-07-12T16:02:58Z | ---
pipeline_tag: image-classification
datasets:
- beans
metrics:
- accuracy
tags:
- vit
---
**Task:** `image-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `nateraw/vit-base-beans`
* **dataset**:
* **path**: `beans`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'image'}`
* **ref_keys**: `['labels']`
* **calibration_split**: `train`
* **quantization_approach**: `dynamic`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **operators_to_quantize**: `['Add']`, `['Add', 'MatMul']`
* **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']`
* **per_channel**: `False`, `True`
# Evaluation
## Non-time metrics
| operators_to_quantize | node_exclusion | per_channel | | accuracy (original) | accuracy (optimized) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-----------------: | :------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 0.980 | 0.980 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 0.980 | 0.980 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 0.980 | 0.980 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 0.980 | 0.980 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 0.980 | 0.980 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 0.980 | 0.980 |
| `['Add']` | `[]` | `False` | \| | 0.980 | 0.980 |
| `['Add']` | `[]` | `True` | \| | 0.980 | 0.980 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 1, input length = 32.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 200.50 | 63.00 | \| | 5.00 | 15.93 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 198.19 | 72.65 | \| | 5.07 | 13.80 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 191.44 | 63.27 | \| | 5.27 | 15.87 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 154.84 | 72.51 | \| | 6.47 | 13.80 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 155.84 | 130.95 | \| | 6.47 | 7.67 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 201.76 | 131.25 | \| | 5.00 | 7.67 |
| `['Add']` | `[]` | `False` | \| | 198.96 | 128.82 | \| | 5.07 | 7.80 |
| `['Add']` | `[]` | `True` | \| | 163.76 | 129.62 | \| | 6.13 | 7.73 |
Below, time metrics for batch size = 1, input length = 64.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 162.75 | 67.18 | \| | 6.20 | 14.93 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 159.69 | 72.77 | \| | 6.33 | 13.80 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 183.10 | 64.02 | \| | 5.47 | 15.67 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 157.21 | 64.16 | \| | 6.40 | 15.60 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 155.32 | 130.74 | \| | 6.47 | 7.67 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 198.56 | 162.51 | \| | 5.07 | 6.20 |
| `['Add']` | `[]` | `False` | \| | 186.58 | 163.38 | \| | 5.40 | 6.13 |
| `['Add']` | `[]` | `True` | \| | 199.75 | 131.46 | \| | 5.07 | 7.67 |
Below, time metrics for batch size = 1, input length = 128.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 160.58 | 67.65 | \| | 6.27 | 14.80 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 158.60 | 72.53 | \| | 6.33 | 13.80 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 200.46 | 62.95 | \| | 5.00 | 15.93 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 195.39 | 72.28 | \| | 5.13 | 13.87 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 197.59 | 128.80 | \| | 5.07 | 7.80 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 156.24 | 162.63 | \| | 6.47 | 6.20 |
| `['Add']` | `[]` | `False` | \| | 157.25 | 129.13 | \| | 6.40 | 7.80 |
| `['Add']` | `[]` | `True` | \| | 176.08 | 161.79 | \| | 5.73 | 6.20 |
Below, time metrics for batch size = 4, input length = 32.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 503.83 | 219.62 | \| | 2.00 | 4.60 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 603.26 | 266.15 | \| | 1.67 | 3.80 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 654.79 | 217.45 | \| | 1.53 | 4.60 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 654.33 | 219.54 | \| | 1.53 | 4.60 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 654.20 | 481.61 | \| | 1.53 | 2.13 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 609.81 | 632.73 | \| | 1.67 | 1.60 |
| `['Add']` | `[]` | `False` | \| | 588.86 | 602.91 | \| | 1.73 | 1.67 |
| `['Add']` | `[]` | `True` | \| | 666.98 | 655.32 | \| | 1.53 | 1.53 |
Below, time metrics for batch size = 4, input length = 64.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 656.87 | 216.32 | \| | 1.53 | 4.67 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 507.24 | 265.62 | \| | 2.00 | 3.80 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 655.36 | 219.61 | \| | 1.53 | 4.60 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 613.28 | 220.96 | \| | 1.67 | 4.53 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 656.30 | 652.72 | \| | 1.53 | 1.53 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 521.09 | 472.90 | \| | 1.93 | 2.13 |
| `['Add']` | `[]` | `False` | \| | 655.37 | 473.77 | \| | 1.53 | 2.13 |
| `['Add']` | `[]` | `True` | \| | 653.62 | 468.82 | \| | 1.53 | 2.13 |
Below, time metrics for batch size = 4, input length = 128.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 654.24 | 216.82 | \| | 1.53 | 4.67 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 657.16 | 240.11 | \| | 1.53 | 4.20 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 504.14 | 217.47 | \| | 2.00 | 4.60 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 655.94 | 220.12 | \| | 1.53 | 4.60 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 653.99 | 479.06 | \| | 1.53 | 2.13 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 642.48 | 666.28 | \| | 1.60 | 1.53 |
| `['Add']` | `[]` | `False` | \| | 656.34 | 661.24 | \| | 1.53 | 1.53 |
| `['Add']` | `[]` | `True` | \| | 661.86 | 472.49 | \| | 1.53 | 2.13 |
Below, time metrics for batch size = 8, input length = 32.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1294.07 | 472.54 | \| | 0.80 | 2.13 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1287.58 | 542.72 | \| | 0.80 | 1.87 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 1033.37 | 433.32 | \| | 1.00 | 2.33 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 1030.14 | 542.36 | \| | 1.00 | 1.87 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 953.27 | 926.14 | \| | 1.07 | 1.13 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1173.01 | 995.22 | \| | 0.87 | 1.07 |
| `['Add']` | `[]` | `False` | \| | 1280.07 | 926.97 | \| | 0.80 | 1.13 |
| `['Add']` | `[]` | `True` | \| | 1283.70 | 927.87 | \| | 0.80 | 1.13 |
Below, time metrics for batch size = 8, input length = 64.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1273.61 | 435.27 | \| | 0.80 | 2.33 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1157.00 | 542.75 | \| | 0.87 | 1.87 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 968.85 | 537.65 | \| | 1.07 | 1.87 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 1107.66 | 472.53 | \| | 0.93 | 2.13 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1270.30 | 1092.10 | \| | 0.80 | 0.93 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1263.29 | 1012.66 | \| | 0.80 | 1.00 |
| `['Add']` | `[]` | `False` | \| | 1007.19 | 1331.12 | \| | 1.07 | 0.80 |
| `['Add']` | `[]` | `True` | \| | 1286.51 | 1317.96 | \| | 0.80 | 0.80 |
Below, time metrics for batch size = 8, input length = 128.
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1188.98 | 537.58 | \| | 0.87 | 1.87 |
| `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 951.31 | 489.40 | \| | 1.07 | 2.07 |
| `['Add', 'MatMul']` | `[]` | `False` | \| | 1278.73 | 537.52 | \| | 0.80 | 1.87 |
| `['Add', 'MatMul']` | `[]` | `True` | \| | 1005.38 | 440.01 | \| | 1.07 | 2.33 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1265.55 | 1304.51 | \| | 0.80 | 0.80 |
| `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1186.54 | 934.09 | \| | 0.87 | 1.13 |
| `['Add']` | `[]` | `False` | \| | 1276.38 | 1319.84 | \| | 0.80 | 0.80 |
| `['Add']` | `[]` | `True` | \| | 981.81 | 940.69 | \| | 1.07 | 1.07 |
|
MarLac/wav2vec2-base-timit-demo-google-colab | MarLac | 2022-07-12T15:41:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-12T08:24:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5816
- Wer: 0.3533
## Model description
More information needed
## Intended uses & limitations
More information needed
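A minimal transcription sketch with the `transformers` ASR pipeline; the audio path is a placeholder and the input should be 16 kHz mono audio:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="MarLac/wav2vec2-base-timit-demo-google-colab",
)

# "sample.wav" is a placeholder path to a local 16 kHz recording.
print(asr("sample.wav"))
```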
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.243 | 0.5 | 500 | 1.0798 | 0.7752 |
| 0.834 | 1.01 | 1000 | 0.6206 | 0.5955 |
| 0.5503 | 1.51 | 1500 | 0.5387 | 0.5155 |
| 0.4548 | 2.01 | 2000 | 0.4660 | 0.4763 |
| 0.3412 | 2.51 | 2500 | 0.8381 | 0.4836 |
| 0.3128 | 3.02 | 3000 | 0.4818 | 0.4519 |
| 0.2547 | 3.52 | 3500 | 0.4415 | 0.4230 |
| 0.2529 | 4.02 | 4000 | 0.4624 | 0.4219 |
| 0.2103 | 4.52 | 4500 | 0.4714 | 0.4096 |
| 0.2102 | 5.03 | 5000 | 0.4968 | 0.4087 |
| 0.1838 | 5.53 | 5500 | 0.4643 | 0.4131 |
| 0.1721 | 6.03 | 6000 | 0.4676 | 0.3979 |
| 0.1548 | 6.53 | 6500 | 0.4765 | 0.4085 |
| 0.1595 | 7.04 | 7000 | 0.4797 | 0.3941 |
| 0.1399 | 7.54 | 7500 | 0.4753 | 0.3902 |
| 0.1368 | 8.04 | 8000 | 0.4697 | 0.3945 |
| 0.1276 | 8.54 | 8500 | 0.5438 | 0.3869 |
| 0.1255 | 9.05 | 9000 | 0.5660 | 0.3841 |
| 0.1077 | 9.55 | 9500 | 0.4964 | 0.3947 |
| 0.1197 | 10.05 | 10000 | 0.5349 | 0.3849 |
| 0.1014 | 10.55 | 10500 | 0.5558 | 0.3883 |
| 0.0949 | 11.06 | 11000 | 0.5673 | 0.3785 |
| 0.0882 | 11.56 | 11500 | 0.5589 | 0.3955 |
| 0.0906 | 12.06 | 12000 | 0.5752 | 0.4120 |
| 0.1064 | 12.56 | 12500 | 0.5080 | 0.3727 |
| 0.0854 | 13.07 | 13000 | 0.5398 | 0.3798 |
| 0.0754 | 13.57 | 13500 | 0.5237 | 0.3816 |
| 0.0791 | 14.07 | 14000 | 0.4967 | 0.3725 |
| 0.0731 | 14.57 | 14500 | 0.5287 | 0.3744 |
| 0.0719 | 15.08 | 15000 | 0.5633 | 0.3596 |
| 0.062 | 15.58 | 15500 | 0.5399 | 0.3752 |
| 0.0681 | 16.08 | 16000 | 0.5151 | 0.3759 |
| 0.0559 | 16.58 | 16500 | 0.5564 | 0.3709 |
| 0.0533 | 17.09 | 17000 | 0.5933 | 0.3743 |
| 0.0563 | 17.59 | 17500 | 0.5381 | 0.3670 |
| 0.0527 | 18.09 | 18000 | 0.5685 | 0.3731 |
| 0.0492 | 18.59 | 18500 | 0.5728 | 0.3725 |
| 0.0509 | 19.1 | 19000 | 0.6074 | 0.3807 |
| 0.0436 | 19.6 | 19500 | 0.5762 | 0.3628 |
| 0.0434 | 20.1 | 20000 | 0.6721 | 0.3729 |
| 0.0416 | 20.6 | 20500 | 0.5842 | 0.3700 |
| 0.0431 | 21.11 | 21000 | 0.5374 | 0.3607 |
| 0.037 | 21.61 | 21500 | 0.5556 | 0.3667 |
| 0.036 | 22.11 | 22000 | 0.5608 | 0.3592 |
| 0.04 | 22.61 | 22500 | 0.5272 | 0.3637 |
| 0.047 | 23.12 | 23000 | 0.5234 | 0.3625 |
| 0.0506 | 23.62 | 23500 | 0.5427 | 0.3629 |
| 0.0418 | 24.12 | 24000 | 0.5590 | 0.3626 |
| 0.037 | 24.62 | 24500 | 0.5615 | 0.3555 |
| 0.0429 | 25.13 | 25000 | 0.5806 | 0.3616 |
| 0.045 | 25.63 | 25500 | 0.5777 | 0.3639 |
| 0.0283 | 26.13 | 26000 | 0.5987 | 0.3617 |
| 0.0253 | 26.63 | 26500 | 0.5671 | 0.3551 |
| 0.032 | 27.14 | 27000 | 0.5464 | 0.3582 |
| 0.0321 | 27.64 | 27500 | 0.5634 | 0.3573 |
| 0.0274 | 28.14 | 28000 | 0.5513 | 0.3575 |
| 0.0245 | 28.64 | 28500 | 0.5745 | 0.3537 |
| 0.0251 | 29.15 | 29000 | 0.5759 | 0.3547 |
| 0.0222 | 29.65 | 29500 | 0.5816 | 0.3533 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
andreaschandra/xlm-roberta-base-finetuned-panx-it | andreaschandra | 2022-07-12T15:34:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-12T15:30:49Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8288879770209273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2380
- F1: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
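A minimal sketch for Italian NER with the `transformers` pipeline; the example sentence is illustrative and the label set follows the PAN-X annotations:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="andreaschandra/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Giuseppe vive a Roma e lavora per la FIAT."))
```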
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7058 | 1.0 | 70 | 0.3183 | 0.7480 |
| 0.2808 | 2.0 | 140 | 0.2647 | 0.8070 |
| 0.1865 | 3.0 | 210 | 0.2380 | 0.8289 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
andy-0v0/fancy-animales | andy-0v0 | 2022-07-12T15:30:18Z | 54 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-07-07T22:16:04Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fancy-animales
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9464285969734192
---
# fancy-animales
Just for fun and to test the template!
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
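A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="andy-0v0/fancy-animales")

# Pass a local file path or an image URL; "my_animal.jpg" is a placeholder.
print(classifier("my_animal.jpg"))
```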
## Example Images
#### chow chow

#### panda

#### penguin

#### sloth

#### wombat
 |
zluvolyote/CUBERT | zluvolyote | 2022-07-12T15:09:51Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-15T18:09:44Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: CUBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CUBERT
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 5.5281 |
| No log | 2.0 | 116 | 5.2508 |
| No log | 3.0 | 174 | 5.2203 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.1
- Tokenizers 0.12.1
|
andreaschandra/xlm-roberta-base-finetuned-panx-de-fr | andreaschandra | 2022-07-12T15:05:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-12T14:49:14Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1619
- F1: 0.8599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2851 | 1.0 | 715 | 0.1792 | 0.8239 |
| 0.149 | 2.0 | 1430 | 0.1675 | 0.8401 |
| 0.0955 | 3.0 | 2145 | 0.1619 | 0.8599 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Kuro96/q-FrozenLake-v1-4x4-noSlippery | Kuro96 | 2022-07-12T14:35:27Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-12T14:35:21Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Deep RL course notebooks (Unit 2); they are not part of a published package.
model = load_from_hub(repo_id="Kuro96/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
huggingtweets/piotrikonowicz1 | huggingtweets | 2022-07-12T14:00:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-12T14:00:22Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/770622589664460802/bgUHfTNZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Piotr Ikonowicz</div>
<div style="text-align: center; font-size: 14px;">@piotrikonowicz1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Piotr Ikonowicz.
| Data | Piotr Ikonowicz |
| --- | --- |
| Tweets downloaded | 133 |
| Retweets | 3 |
| Short tweets | 13 |
| Tweets kept | 117 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/156jwrd1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piotrikonowicz1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w029u281) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w029u281/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/piotrikonowicz1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
workRL/TEST2ppo-CarRacing-v0 | workRL | 2022-07-12T13:31:15Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"CarRacing-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-12T13:29:34Z | ---
library_name: stable-baselines3
tags:
- CarRacing-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -69.53 +/- 1.56
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v0
type: CarRacing-v0
---
# **PPO** Agent playing **CarRacing-v0**
This is a trained model of a **PPO** agent playing **CarRacing-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hugginglearners/rice_image_classification | hugginglearners | 2022-07-12T13:27:14Z | 0 | 0 | fastai | [
"fastai",
"image-classification",
"region:us"
] | image-classification | 2022-07-09T06:03:15Z | ---
tags:
- fastai
- image-classification
---
## Model description
This repo contains the trained model for rice image classification
Full credits go to [Vu Minh Chien](https://www.linkedin.com/in/vumichien/)
Motivation: Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are distinguished from one another by certain features, typically texture, shape, and color. Using these distinguishing features, it is possible to classify rice varieties and evaluate seed quality.
## Intended uses & limitations
In this repo, five rice varieties commonly grown in Turkey were used: Arborio, Basmati, Ipsala, Jasmine, and Karacadag. The dataset contains a total of 75,000 grain images, 15,000 from each variety.
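A minimal loading sketch, assuming the repo stores a fastai Learner exported for the Hub (the image path is a placeholder):

```python
from huggingface_hub import from_pretrained_fastai

# Download and reconstruct the fastai Learner from the Hub.
learner = from_pretrained_fastai("hugginglearners/rice_image_classification")

# "rice_grain.jpg" is a placeholder path to a single grain image.
prediction = learner.predict("rice_grain.jpg")
print(prediction)
```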
## Training and evaluation data
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 3e-4 |
| freeze_epochs| 3 |
| unfreeze_epochs| 10|
| training_precision | float16 |
|
ymcnabb/finetuning-sentiment-model | ymcnabb | 2022-07-12T13:17:58Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-12T12:24:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8758169934640523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3291
- Accuracy: 0.8733
- F1: 0.8758
## Model description
More information needed
## Intended uses & limitations
More information needed
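A minimal inference sketch that loads the checkpoint directly with `AutoModelForSequenceClassification` (the example review is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "ymcnabb/finetuning-sentiment-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("A thoroughly enjoyable film with a clever script.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```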
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
xuantsh/distilroberta-base-Mark_example | xuantsh | 2022-07-12T13:13:45Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-12T12:57:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-Mark_example
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-Mark_example
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6043
## Model description
More information needed
## Intended uses & limitations
More information needed
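A minimal fill-mask sketch with the `transformers` pipeline; distilroberta-based checkpoints use `<mask>` as the mask token, and the example sentence is illustrative:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xuantsh/distilroberta-base-Mark_example")

# Returns the top candidate tokens for the masked position.
print(fill_mask("The weather today is really <mask>."))
```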
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8299 | 1.0 | 744 | 2.6322 |
| 2.7034 | 2.0 | 1488 | 2.6514 |
| 2.5616 | 3.0 | 2232 | 2.6596 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_standart | Nonzerophilip | 2022-07-12T12:42:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-19T09:36:49Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner_swedish_small_set_health_and_standart
results: []
---
# Named Entity Recognition model for Swedish
This model is a fine-tuned version of [KBLab/bert-base-swedish-cased-ner](https://huggingface.co/KBLab/bert-base-swedish-cased-ner) for Swedish only. It has been fine-tuned on the concatenation of a smaller version of SUC 3.0 and medical text from the Swedish website 1177.
The model will predict the following entities:
| Tag | Name | Example |
|:-------------:|:-----:|:----:|
| PER | Person | (e.g., Johan and Sofia) |
| LOC | Location | (e.g., Göteborg and Spanien) |
| ORG | Organisation | (e.g., Volvo and Skatteverket) |
| PHARMA_DRUGS | Medication | (e.g., Paracetamol and Omeprazol) |
| HEALTH | Illness/Diseases | (e.g., Cancer, sjuk and diabetes) |
| Relation | Family members | (e.g., Mamma and Farmor) |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_swedish_small_set_health_and_standart
It achieves the following results on the evaluation set:
- Loss: 0.0963
- Precision: 0.7548
- Recall: 0.7811
- F1: 0.7677
- Accuracy: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
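A minimal usage sketch with the `transformers` pipeline (the Swedish example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Nonzerophilip/bert-finetuned-ner_swedish_small_set_health_and_standart",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Johan tar Paracetamol mot sin diabetes och bor i Göteborg."))
```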
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 219 | 0.1123 | 0.7674 | 0.6567 | 0.7078 | 0.9681 |
| No log | 2.0 | 438 | 0.0934 | 0.7643 | 0.7662 | 0.7652 | 0.9738 |
| 0.1382 | 3.0 | 657 | 0.0963 | 0.7548 | 0.7811 | 0.7677 | 0.9756 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.7.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
rajat99/Fine_Tuning_XLSR_300M_testing_model | rajat99 | 2022-07-12T12:00:41Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-12T10:26:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Fine_Tuning_XLSR_300M_testing_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_XLSR_300M_testing_model
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2861
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.5178 | 23.53 | 400 | 3.2861 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
cffl/bart-base-styletransfer-subjective-to-neutral | cffl | 2022-07-12T11:58:08Z | 286 | 3 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1911.09709",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-07-01T18:41:46Z | ---
license: apache-2.0
---
# bart-base-styletransfer-subjective-to-neutral
## Model description
This [facebook/bart-base](https://huggingface.co/facebook/bart-base) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to transfer style in text from subjectively biased to neutrally toned.
The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html).
## Intended uses & limitations
The model is intended purely as a research output for NLP and data science communities. We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically.
Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BART reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias. Our efforts quantified model performance using two custom evaluation metrics, neither of which has been correlated to human evaluation for the task.
As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective.
We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these fact-based edits are out of scope for a subjective-to-neutral style transfer modeling task, but they exist here nonetheless.
## How to use
This model can be used directly with a HuggingFace pipeline for `text2text-generation`.
```python
>>> from transformers import pipeline
>>> styletransfer = pipeline(
task="text2text-generation",
model="cffl/bart-base-styletransfer-subjective-to-neutral",
max_length=200,
)
>>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> styletransfer(input_text)
[{'generated_text': 'chemical abstracts service (cas), a division of the american chemical society, is a source of chemical information.'}]
```
## Training procedure
For modeling, we made extensive use of the Huggingface transformers library by initializing the [BartForConditionalGeneration](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration) model with [facebook/bart-base](https://huggingface.co/facebook/bart-base) pretrained weights and adapting the [summarization fine-tuning script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) for our TST-specific needs. We fine-tune the model for 15 epochs on an NVIDIA Tesla V100 GPU with a batch size of 32. (Note that when fine-tuning the model with the parallel examples, the noising function is turned off so an uncorrupted document is passed to BART's encoder and decoder.)
Please refer to [our blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html) for a discussion of evaluation metrics and results.
|
cffl/bert-base-styleclassification-subjective-neutral | cffl | 2022-07-12T11:57:42Z | 2,297 | 8 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1911.09709",
"arxiv:1703.01365",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-01T19:35:53Z | ---
license: apache-2.0
---
# bert-base-styleclassification-subjective-neutral
## Model description
This [bert-base-uncased](https://huggingface.co/bert-base-uncased) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to classify text as subjectively biased vs. neutrally toned.
The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html).
## Intended uses & limitations
The model is intended purely as a research output for NLP and data science communities. We developed this model for the purpose of evaluating text style transfer output. Specifically, we derive a Style Transfer Intensity (STI) metric from the classifier's output distributions. We also extract feature importances from the model via [Integrated Gradients](https://arxiv.org/pdf/1703.01365.pdf) to support a Content Preservation Score (CPS).
We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically.
Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BERT reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias.
As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective.
We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these fact-based edits are out of scope for a subjective-to-neutral style transfer modeling task, but they exist here nonetheless.
## How to use
This model can be used directly with a HuggingFace pipeline for `text-classification`.
```python
>>> from transformers import pipeline
>>> classify = pipeline(
task="text-classification",
model="cffl/bert-base-styleclassification-subjective-neutral",
return_all_scores=True,
)
>>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> classify(input_text)
[[{'label': 'SUBJECTIVE', 'score': 0.9765084385871887},
{'label': 'NEUTRAL', 'score': 0.023491567000746727}]]
```
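The intended-uses section above mentions deriving a Style Transfer Intensity (STI) metric from the classifier's output distributions. As a rough, hedged illustration only (not the exact metric from the blog series), the sketch below compares the probability assigned to the `SUBJECTIVE` label before and after neutralization; both example sentences and the simple difference-based score are assumptions for illustration.
```python
>>> source = "cas is the world's leading source of chemical information."
>>> neutralized = "cas is a source of chemical information."
>>> def subjective_prob(text):
...     # probability mass the classifier assigns to the SUBJECTIVE label
...     return next(s["score"] for s in classify(text)[0] if s["label"] == "SUBJECTIVE")
>>> sti_proxy = subjective_prob(source) - subjective_prob(neutralized)  # crude intensity proxy, for illustration only
```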
## Training procedure
For training, we initialize HuggingFace’s [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSequenceClassification) with [bert-base-uncased](https://huggingface.co/bert-base-uncased) pre-trained weights and perform a hyperparameter search over: batch size [16, 32], learning rate [3e-05, 3e-06, 3e-07], weight decay [0, 0.01, 0.1] and batch shuffling [True, False] while training for 15 epochs.
We monitor performance using accuracy as we have a perfectly balanced dataset and assign equal cost to false positives and false negatives. The best performing model produces an overall accuracy of 72.50% -- please reference our [training script](https://github.com/fastforwardlabs/text-style-transfer/blob/main/scripts/train/classifier/train_classifier.py) and [classifier evaluation notebook](https://github.com/fastforwardlabs/text-style-transfer/blob/main/notebooks/WNC_full_style_classifier_evaluation.ipynb) for further details.
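Enumerated as a plain grid, that search looks roughly like the sketch below. The `train_one_config` helper is hypothetical; it stands in for the linked training script and is not part of this repository.
```python
from itertools import product

# search space copied from the description above
search_space = {
    "per_device_train_batch_size": [16, 32],
    "learning_rate": [3e-05, 3e-06, 3e-07],
    "weight_decay": [0, 0.01, 0.1],
    "shuffle_train_data": [True, False],
}

results = []
for values in product(*search_space.values()):
    config = dict(zip(search_space.keys(), values))
    # train_one_config is a hypothetical wrapper around the training script linked above;
    # it is assumed to return the evaluation accuracy for one configuration
    results.append((config, train_one_config(**config, num_train_epochs=15)))

best_config, best_accuracy = max(results, key=lambda item: item[1])
```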
|
Vikasbhandari/wav2vec2-train | Vikasbhandari | 2022-07-12T11:51:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.11430",
"arxiv:2006.11477",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-12T11:11:37Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-large-960h-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.9
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.9
---
# Wav2Vec2-Large-960h-Lv60 + Self-Training
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model, pretrained on Libri-Light and fine-tuned on 960 hours of Librispeech, both 16kHz sampled speech audio. The model was trained with the [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model, make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
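The model expects 16kHz input. If your own recordings use a different sampling rate, resample before calling the processor — a minimal sketch continuing from the snippet above, using torchaudio with a placeholder file path:
```python
import torchaudio

# load a local file (placeholder path) and resample to the 16kHz rate the model expects
waveform, sample_rate = torchaudio.load("my_recording.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

# reuse the processor and model loaded above
input_values = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt").input_values
predicted_ids = torch.argmax(model(input_values).logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```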
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.9 | 3.9 | |
dungeoun/pos_neg_neu_tweet_BERT | dungeoun | 2022-07-12T11:08:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-07-12T06:22:25Z |
---
license: apache-2.0
pipeline_tag: text-classification
---
This repository contains a BERT model fine-tuned on tweets labeled with Positive, Negative, and Neutral sentiment.
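A minimal usage sketch, assuming the checkpoint and tokenizer load with the standard transformers auto classes and that the label mapping is configured in the repository; the example tweet is illustrative.
```python
from transformers import pipeline

# assumes a standard sequence-classification checkpoint with its id2label mapping set
classifier = pipeline("text-classification", model="dungeoun/pos_neg_neu_tweet_BERT")
print(classifier("Loving the new update, everything feels faster!"))
```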
|
MiguelCosta/finetuning-sentiment-model-24000-samples | MiguelCosta | 2022-07-12T10:48:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-12T06:17:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-24000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
- name: F1
type: f1
value: 0.9273927392739274
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-24000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3505
- Accuracy: 0.9267
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained | nawta | 2022-07-12T10:20:53Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-12T05:31:38Z | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained
This model is a fine-tuned version of [/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin](https://huggingface.co//root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2963
- Cer: 0.9002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3287 | 23.81 | 500 | 2.2963 | 0.9002 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
luke-thorburn/suggest-conclusion-bias-only | luke-thorburn | 2022-07-12T10:08:32Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
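As an illustration of how this template might be filled at inference time, here is a hedged sketch using the transformers text-generation pipeline; the premises and generation settings are placeholders, not values from the associated paper.
```python
from transformers import pipeline

# assumes the checkpoint loads as a standard causal LM; premises and settings are illustrative
generator = pipeline("text-generation", model="luke-thorburn/suggest-conclusion-bias-only")

prompt = (
    "Consider the facts:\n"
    "* All mammals are warm-blooded.\n"
    "* Whales are mammals.\n"
    "We must conclude that:"
)
output = generator(prompt, max_new_tokens=30, do_sample=False)
print(output[0]["generated_text"][len(prompt):])
```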
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-bias-only | luke-thorburn | 2022-07-12T10:07:19Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List reasons why: [original claim]
Reasons:
* [reason 1]
* [reason 2]
...
* [reason n]
* [generated reason]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-intermediary-claims-bias-only | luke-thorburn | 2022-07-12T10:06:29Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Input: [start claim] -> [end claim]
Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-full-finetune | luke-thorburn | 2022-07-12T10:04:57Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List reasons why: [original claim]
Reasons:
* [reason 1]
* [reason 2]
...
* [reason n]
* [generated reason]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-conclusion-full-finetune | luke-thorburn | 2022-07-12T10:02:48Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Consider the facts:
* [premise 1]
* [premise 2]
...
* [premise n]
We must conclude that: [generated conclusion]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-intermediary-claims-full-finetune | luke-thorburn | 2022-07-12T09:56:47Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate a chain of reasoning from one claim to another
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
Input: [start claim] -> [end claim]
Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-objections-full-finetune | luke-thorburn | 2022-07-12T09:54:28Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
List objections to the claim that: [original claim]
Objections:
* [objection 1]
* [objection 2]
...
* [objection n]
* [generated objection]
```
# Dataset
The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-reasons-soft | luke-thorburn | 2022-07-12T09:45:30Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate reasons that support a claim
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][original claim]
Pros:
- [reason 1]
- [reason 2]
...
- [reason n]
- [generated reason]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is built on [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B) (whose weights are left unchanged), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-conclusion-soft | luke-thorburn | 2022-07-12T09:43:47Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate the conclusion of an argument
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the conclusion of an argument given its premises. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt]- [premise 1]
- [premise 2]
...
- [premise n]
Conclusion: [generated conclusion]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is built on [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B) (whose weights are left unchanged), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
luke-thorburn/suggest-objections-soft | luke-thorburn | 2022-07-12T09:43:28Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"argumentation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- argumentation
license: apache-2.0
metrics:
- perplexity
---
# Generate objections to a claim
This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks.
Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review.
# Prompt Template
```
[prepended soft prompt][original claim]
Cons:
- [objection 1]
- [objection 2]
...
- [objection n]
- [generated objection]
```
# Dataset
The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/).
# Limitations and Biases
The model is built on [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B) (whose weights are left unchanged), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon.
# Acknowledgements
This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia. |
fxmarty/20220712-h08m05s32_ | fxmarty | 2022-07-12T08:05:37Z | 0 | 0 | null | [
"tensorboard",
"vit",
"image-classification",
"dataset:beans",
"region:us"
] | image-classification | 2022-07-12T08:05:32Z | ---
pipeline_tag: image-classification
datasets:
- beans
metrics:
- accuracy
tags:
- vit
---
**task**: `image-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `nateraw/vit-base-beans`
* **dataset**:
* **path**: `beans`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'image'}`
* **ref_keys**: `['labels']`
* **quantization_approach**: `dynamic`
* **node_exclusion**: `[]`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`, `[]`
* **per_channel**: `False`, `True`
# Evaluation
## Non-time metrics
| operators_to_quantize | per_channel | | accuracy (original) | accuracy (optimized) |
| :-------------------: | :---------: | :-: | :-----------------: | :------------------: |
| `['Add', 'MatMul']` | `False` | \| | 0.980 | 0.980 |
| `['Add', 'MatMul']` | `True` | \| | 0.980 | 0.980 |
| `['Add']` | `False` | \| | 0.980 | 0.980 |
| `['Add']` | `True` | \| | 0.980 | 0.980 |
| `[]` | `False` | \| | 0.980 | 0.980 |
| `[]` | `True` | \| | 0.980 | 0.980 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 1, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 201.25 | 70.30 | \| | 5.00 | 14.27 |
| `['Add', 'MatMul']` | `True` | \| | 203.52 | 72.48 | \| | 4.93 | 13.80 |
| `['Add']` | `False` | \| | 166.03 | 150.93 | \| | 6.07 | 6.67 |
| `['Add']` | `True` | \| | 200.82 | 163.17 | \| | 5.00 | 6.13 |
| `[]` | `False` | \| | 190.99 | 162.06 | \| | 5.27 | 6.20 |
| `[]` | `True` | \| | 155.15 | 162.52 | \| | 6.47 | 6.20 |
Below, time metrics for batch size = 1, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 165.85 | 70.60 | \| | 6.07 | 14.20 |
| `['Add', 'MatMul']` | `True` | \| | 161.41 | 72.71 | \| | 6.20 | 13.80 |
| `['Add']` | `False` | \| | 200.45 | 129.40 | \| | 5.00 | 7.73 |
| `['Add']` | `True` | \| | 154.68 | 136.42 | \| | 6.47 | 7.40 |
| `[]` | `False` | \| | 166.97 | 162.15 | \| | 6.00 | 6.20 |
| `[]` | `True` | \| | 166.32 | 162.81 | \| | 6.07 | 6.20 |
Below, time metrics for batch size = 1, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 199.48 | 70.98 | \| | 5.07 | 14.13 |
| `['Add', 'MatMul']` | `True` | \| | 199.65 | 71.78 | \| | 5.07 | 13.93 |
| `['Add']` | `False` | \| | 199.08 | 137.97 | \| | 5.07 | 7.27 |
| `['Add']` | `True` | \| | 189.93 | 162.45 | \| | 5.33 | 6.20 |
| `[]` | `False` | \| | 191.63 | 162.54 | \| | 5.27 | 6.20 |
| `[]` | `True` | \| | 200.38 | 162.55 | \| | 5.00 | 6.20 |
Below, time metrics for batch size = 4, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 655.84 | 243.33 | \| | 1.53 | 4.13 |
| `['Add', 'MatMul']` | `True` | \| | 661.27 | 221.16 | \| | 1.53 | 4.53 |
| `['Add']` | `False` | \| | 662.84 | 529.28 | \| | 1.53 | 1.93 |
| `['Add']` | `True` | \| | 512.47 | 470.66 | \| | 2.00 | 2.13 |
| `[]` | `False` | \| | 562.81 | 501.77 | \| | 1.80 | 2.00 |
| `[]` | `True` | \| | 505.81 | 521.20 | \| | 2.00 | 1.93 |
Below, time metrics for batch size = 4, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 654.58 | 258.54 | \| | 1.53 | 3.93 |
| `['Add', 'MatMul']` | `True` | \| | 617.44 | 234.05 | \| | 1.67 | 4.33 |
| `['Add']` | `False` | \| | 661.51 | 478.81 | \| | 1.53 | 2.13 |
| `['Add']` | `True` | \| | 657.01 | 660.23 | \| | 1.53 | 1.53 |
| `[]` | `False` | \| | 661.64 | 474.28 | \| | 1.53 | 2.13 |
| `[]` | `True` | \| | 661.29 | 471.09 | \| | 1.53 | 2.13 |
Below, time metrics for batch size = 4, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 654.80 | 219.38 | \| | 1.53 | 4.60 |
| `['Add', 'MatMul']` | `True` | \| | 663.50 | 222.37 | \| | 1.53 | 4.53 |
| `['Add']` | `False` | \| | 625.56 | 529.02 | \| | 1.60 | 1.93 |
| `['Add']` | `True` | \| | 655.08 | 499.41 | \| | 1.53 | 2.07 |
| `[]` | `False` | \| | 655.92 | 473.01 | \| | 1.53 | 2.13 |
| `[]` | `True` | \| | 505.54 | 659.92 | \| | 2.00 | 1.53 |
Below, time metrics for batch size = 8, input length = 32.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 968.83 | 443.80 | \| | 1.07 | 2.27 |
| `['Add', 'MatMul']` | `True` | \| | 1255.70 | 489.55 | \| | 0.80 | 2.07 |
| `['Add']` | `False` | \| | 1301.35 | 938.14 | \| | 0.80 | 1.07 |
| `['Add']` | `True` | \| | 1279.54 | 931.91 | \| | 0.80 | 1.13 |
| `[]` | `False` | \| | 1292.66 | 1318.07 | \| | 0.80 | 0.80 |
| `[]` | `True` | \| | 1290.35 | 1314.74 | \| | 0.80 | 0.80 |
Below, time metrics for batch size = 8, input length = 64.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 1305.45 | 438.06 | \| | 0.80 | 2.33 |
| `['Add', 'MatMul']` | `True` | \| | 1296.68 | 450.40 | \| | 0.80 | 2.27 |
| `['Add']` | `False` | \| | 968.21 | 949.81 | \| | 1.07 | 1.07 |
| `['Add']` | `True` | \| | 1012.35 | 1317.46 | \| | 1.00 | 0.80 |
| `[]` | `False` | \| | 1213.91 | 961.79 | \| | 0.87 | 1.07 |
| `[]` | `True` | \| | 956.39 | 945.41 | \| | 1.07 | 1.07 |
Below, time metrics for batch size = 8, input length = 128.
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `['Add', 'MatMul']` | `False` | \| | 1120.12 | 497.17 | \| | 0.93 | 2.07 |
| `['Add', 'MatMul']` | `True` | \| | 1289.50 | 443.46 | \| | 0.80 | 2.27 |
| `['Add']` | `False` | \| | 1294.65 | 930.97 | \| | 0.80 | 1.13 |
| `['Add']` | `True` | \| | 1181.21 | 933.82 | \| | 0.87 | 1.13 |
| `[]` | `False` | \| | 1245.61 | 1318.07 | \| | 0.87 | 0.80 |
| `[]` | `True` | \| | 1285.81 | 1318.82 | \| | 0.80 | 0.80 |
|
fxmarty/20220712-h08m02s04_example | fxmarty | 2022-07-12T08:02:09Z | 0 | 0 | null | [
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"region:us"
] | token-classification | 2022-07-12T08:02:04Z | ---
pipeline_tag: token-classification
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
tags:
- distilbert
---
**task**: `token-classification`
**Backend:** `sagemaker-training`
**Backend args:** `{'instance_type': 'ml.m5.2xlarge', 'supported_instructions': 'avx512'}`
**Number of evaluation samples:** `All dataset`
Fixed parameters:
* **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english`
* **dataset**:
* **path**: `conll2003`
* **eval_split**: `validation`
* **data_keys**: `{'primary': 'tokens'}`
* **ref_keys**: `['ner_tags']`
* **calibration_split**: `train`
* **node_exclusion**: `[]`
* **per_channel**: `False`
* **calibration**:
* **method**: `minmax`
* **num_calibration_samples**: `100`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `11`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **quantization_approach**: `dynamic`, `static`
* **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`
# Evaluation
## Non-time metrics
| quantization_approach | operators_to_quantize | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) |
| :-------------------: | :-------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 0.936 | 0.935 | \| | 0.944 | 0.943 | \| | 0.940 | 0.939 | \| | 0.988 | 0.988 |
| `dynamic` | `['Add']` | \| | 0.936 | 0.936 | \| | 0.944 | 0.944 | \| | 0.940 | 0.940 | \| | 0.988 | 0.988 |
| `static` | `['Add', 'MatMul']` | \| | 0.936 | 0.063 | \| | 0.944 | 0.246 | \| | 0.940 | 0.100 | \| | 0.988 | 0.343 |
| `static` | `['Add']` | \| | 0.936 | 0.050 | \| | 0.944 | 0.160 | \| | 0.940 | 0.076 | \| | 0.988 | 0.311 |
## Time metrics
Time benchmarks were run for 15 seconds per config.
Below, time metrics for batch size = 1, input length = 32.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 46.38 | 9.96 | \| | 21.60 | 100.47 |
| `dynamic` | `['Add']` | \| | 36.59 | 13.98 | \| | 27.33 | 71.60 |
| `static` | `['Add', 'MatMul']` | \| | 33.84 | 14.46 | \| | 29.60 | 69.20 |
| `static` | `['Add']` | \| | 33.23 | 20.11 | \| | 30.13 | 49.73 |
Below, time metrics for batch size = 1, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 58.92 | 19.68 | \| | 17.00 | 50.87 |
| `dynamic` | `['Add']` | \| | 58.59 | 24.81 | \| | 17.13 | 40.33 |
| `static` | `['Add', 'MatMul']` | \| | 51.41 | 29.36 | \| | 19.47 | 34.07 |
| `static` | `['Add']` | \| | 44.22 | 38.56 | \| | 22.67 | 25.93 |
Below, time metrics for batch size = 1, input length = 128.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 72.38 | 36.47 | \| | 13.87 | 27.47 |
| `dynamic` | `['Add']` | \| | 70.21 | 46.30 | \| | 14.27 | 21.60 |
| `static` | `['Add', 'MatMul']` | \| | 70.76 | 48.24 | \| | 14.13 | 20.80 |
| `static` | `['Add']` | \| | 72.47 | 71.10 | \| | 13.80 | 14.07 |
Below, time metrics for batch size = 4, input length = 32.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 69.76 | 38.50 | \| | 14.40 | 26.00 |
| `dynamic` | `['Add']` | \| | 56.02 | 51.32 | \| | 17.87 | 19.53 |
| `static` | `['Add', 'MatMul']` | \| | 55.05 | 46.80 | \| | 18.20 | 21.40 |
| `static` | `['Add']` | \| | 71.03 | 56.82 | \| | 14.13 | 17.67 |
Below, time metrics for batch size = 4, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 119.91 | 61.51 | \| | 8.40 | 16.27 |
| `dynamic` | `['Add']` | \| | 108.43 | 105.65 | \| | 9.27 | 9.47 |
| `static` | `['Add', 'MatMul']` | \| | 119.89 | 86.76 | \| | 8.40 | 11.53 |
| `static` | `['Add']` | \| | 96.99 | 102.03 | \| | 10.33 | 9.87 |
Below, time metrics for batch size = 4, input length = 128.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 219.78 | 123.71 | \| | 4.60 | 8.13 |
| `dynamic` | `['Add']` | \| | 220.13 | 187.21 | \| | 4.60 | 5.40 |
| `static` | `['Add', 'MatMul']` | \| | 186.39 | 176.99 | \| | 5.40 | 5.67 |
| `static` | `['Add']` | \| | 219.57 | 203.71 | \| | 4.60 | 4.93 |
Below, time metrics for batch size = 8, input length = 32.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 118.32 | 59.22 | \| | 8.47 | 16.93 |
| `dynamic` | `['Add']` | \| | 116.52 | 80.17 | \| | 8.60 | 12.53 |
| `static` | `['Add', 'MatMul']` | \| | 116.59 | 83.55 | \| | 8.60 | 12.00 |
| `static` | `['Add']` | \| | 115.81 | 126.53 | \| | 8.67 | 7.93 |
Below, time metrics for batch size = 8, input length = 64.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 172.71 | 117.89 | \| | 5.80 | 8.53 |
| `dynamic` | `['Add']` | \| | 166.05 | 156.99 | \| | 6.07 | 6.40 |
| `static` | `['Add', 'MatMul']` | \| | 215.00 | 148.93 | \| | 4.67 | 6.73 |
| `static` | `['Add']` | \| | 214.55 | 200.16 | \| | 4.67 | 5.00 |
Below, time metrics for batch size = 8, input length = 128.
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) |
| :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: |
| `dynamic` | `['Add', 'MatMul']` | \| | 403.69 | 307.36 | \| | 2.53 | 3.27 |
| `dynamic` | `['Add']` | \| | 372.85 | 317.53 | \| | 2.73 | 3.20 |
| `static` | `['Add', 'MatMul']` | \| | 352.18 | 320.85 | \| | 2.87 | 3.13 |
| `static` | `['Add']` | \| | 403.55 | 410.17 | \| | 2.53 | 2.47 |
|
AntiSquid/TEST2ppo-LunarLander-v2 | AntiSquid | 2022-07-12T07:10:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-06T21:53:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 285.66 +/- 15.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is assumed, not confirmed)
checkpoint = load_from_hub(repo_id="AntiSquid/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
reecejocumsenbb/testfield-finetuned-imdb | reecejocumsenbb | 2022-07-12T06:02:47Z | 5 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-12T04:23:21Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: reecejocumsenbb/testfield-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# reecejocumsenbb/testfield-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0451
- Validation Loss: 3.9664
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -993, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.0451 | 3.9664 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Shaier/medqa_fine_tuned_linkbert | Shaier | 2022-07-12T04:48:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-07-12T03:27:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: medqa_fine_tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medqa_fine_tuned
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4462
- Accuracy: 0.4002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
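Expressed with the standard `TrainingArguments` API, the settings above would look roughly as follows; this is a sketch only, `output_dir` is a placeholder, and all other arguments are left at their defaults.
```python
from transformers import TrainingArguments

# sketch mapping the listed hyperparameters onto TrainingArguments
training_args = TrainingArguments(
    output_dir="medqa_fine_tuned",           # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=8,            # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=5,
)
```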
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.3208 | 0.3553 |
| 1.2802 | 2.0 | 636 | 1.3428 | 0.3703 |
| 1.2802 | 3.0 | 954 | 1.3780 | 0.3892 |
| 1.1466 | 4.0 | 1272 | 1.4234 | 0.3978 |
| 1.052 | 5.0 | 1590 | 1.4462 | 0.4002 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
Evelyn18/legalectra-small-spanish-becasv3-5 | Evelyn18 | 2022-07-12T04:45:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-12T04:43:31Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-5
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7715 |
| No log | 2.0 | 10 | 5.7001 |
| No log | 3.0 | 15 | 5.6206 |
| No log | 4.0 | 20 | 5.5463 |
| No log | 5.0 | 25 | 5.4866 |
| No log | 6.0 | 30 | 5.4369 |
| No log | 7.0 | 35 | 5.3939 |
| No log | 8.0 | 40 | 5.3545 |
| No log | 9.0 | 45 | 5.3168 |
| No log | 10.0 | 50 | 5.2824 |
| No log | 11.0 | 55 | 5.2504 |
| No log | 12.0 | 60 | 5.2193 |
| No log | 13.0 | 65 | 5.1864 |
| No log | 14.0 | 70 | 5.1515 |
| No log | 15.0 | 75 | 5.1174 |
| No log | 16.0 | 80 | 5.0839 |
| No log | 17.0 | 85 | 5.0497 |
| No log | 18.0 | 90 | 5.0188 |
| No log | 19.0 | 95 | 4.9937 |
| No log | 20.0 | 100 | 4.9726 |
| No log | 21.0 | 105 | 4.9483 |
| No log | 22.0 | 110 | 4.9205 |
| No log | 23.0 | 115 | 4.8993 |
| No log | 24.0 | 120 | 4.8802 |
| No log | 25.0 | 125 | 4.8612 |
| No log | 26.0 | 130 | 4.8498 |
| No log | 27.0 | 135 | 4.8294 |
| No log | 28.0 | 140 | 4.8176 |
| No log | 29.0 | 145 | 4.8144 |
| No log | 30.0 | 150 | 4.8012 |
| No log | 31.0 | 155 | 4.7890 |
| No log | 32.0 | 160 | 4.7745 |
| No log | 33.0 | 165 | 4.7641 |
| No log | 34.0 | 170 | 4.7558 |
| No log | 35.0 | 175 | 4.7474 |
| No log | 36.0 | 180 | 4.7384 |
| No log | 37.0 | 185 | 4.7319 |
| No log | 38.0 | 190 | 4.7262 |
| No log | 39.0 | 195 | 4.7225 |
| No log | 40.0 | 200 | 4.7201 |
| No log | 41.0 | 205 | 4.7165 |
| No log | 42.0 | 210 | 4.7129 |
| No log | 43.0 | 215 | 4.7111 |
| No log | 44.0 | 220 | 4.7086 |
| No log | 45.0 | 225 | 4.7060 |
| No log | 46.0 | 230 | 4.7049 |
| No log | 47.0 | 235 | 4.7036 |
| No log | 48.0 | 240 | 4.7028 |
| No log | 49.0 | 245 | 4.7023 |
| No log | 50.0 | 250 | 4.7020 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-small-spanish-becasv3-4 | Evelyn18 | 2022-07-12T04:38:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-12T04:36:14Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-4
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.6625 |
| No log | 2.0 | 10 | 5.4940 |
| No log | 3.0 | 15 | 5.3886 |
| No log | 4.0 | 20 | 5.3004 |
| No log | 5.0 | 25 | 5.2210 |
| No log | 6.0 | 30 | 5.1434 |
| No log | 7.0 | 35 | 5.0546 |
| No log | 8.0 | 40 | 4.9726 |
| No log | 9.0 | 45 | 4.9227 |
| No log | 10.0 | 50 | 4.8344 |
| No log | 11.0 | 55 | 4.7749 |
| No log | 12.0 | 60 | 4.7381 |
| No log | 13.0 | 65 | 4.7016 |
| No log | 14.0 | 70 | 4.6581 |
| No log | 15.0 | 75 | 4.6231 |
| No log | 16.0 | 80 | 4.5900 |
| No log | 17.0 | 85 | 4.5446 |
| No log | 18.0 | 90 | 4.5041 |
| No log | 19.0 | 95 | 4.4635 |
| No log | 20.0 | 100 | 4.4356 |
| No log | 21.0 | 105 | 4.3985 |
| No log | 22.0 | 110 | 4.3650 |
| No log | 23.0 | 115 | 4.3540 |
| No log | 24.0 | 120 | 4.3270 |
| No log | 25.0 | 125 | 4.2873 |
| No log | 26.0 | 130 | 4.2808 |
| No log | 27.0 | 135 | 4.2623 |
| No log | 28.0 | 140 | 4.2466 |
| No log | 29.0 | 145 | 4.2488 |
| No log | 30.0 | 150 | 4.2410 |
| No log | 31.0 | 155 | 4.2187 |
| No log | 32.0 | 160 | 4.2000 |
| No log | 33.0 | 165 | 4.1883 |
| No log | 34.0 | 170 | 4.1803 |
| No log | 35.0 | 175 | 4.1773 |
| No log | 36.0 | 180 | 4.1652 |
| No log | 37.0 | 185 | 4.1614 |
| No log | 38.0 | 190 | 4.1609 |
| No log | 39.0 | 195 | 4.1652 |
| No log | 40.0 | 200 | 4.1560 |
| No log | 41.0 | 205 | 4.1435 |
| No log | 42.0 | 210 | 4.1463 |
| No log | 43.0 | 215 | 4.1434 |
| No log | 44.0 | 220 | 4.1340 |
| No log | 45.0 | 225 | 4.1259 |
| No log | 46.0 | 230 | 4.1212 |
| No log | 47.0 | 235 | 4.1224 |
| No log | 48.0 | 240 | 4.1257 |
| No log | 49.0 | 245 | 4.1284 |
| No log | 50.0 | 250 | 4.1290 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Evelyn18/legalectra-small-spanish-becasv3-3 | Evelyn18 | 2022-07-12T04:30:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-12T04:28:15Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-3
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7608 |
| No log | 2.0 | 10 | 5.5991 |
| No log | 3.0 | 15 | 5.5162 |
| No log | 4.0 | 20 | 5.4370 |
| No log | 5.0 | 25 | 5.3521 |
| No log | 6.0 | 30 | 5.2657 |
| No log | 7.0 | 35 | 5.1771 |
| No log | 8.0 | 40 | 5.1024 |
| No log | 9.0 | 45 | 5.0248 |
| No log | 10.0 | 50 | 4.9609 |
| No log | 11.0 | 55 | 4.9167 |
| No log | 12.0 | 60 | 4.8487 |
| No log | 13.0 | 65 | 4.8175 |
| No log | 14.0 | 70 | 4.7646 |
| No log | 15.0 | 75 | 4.7276 |
| No log | 16.0 | 80 | 4.7003 |
| No log | 17.0 | 85 | 4.6518 |
| No log | 18.0 | 90 | 4.6240 |
| No log | 19.0 | 95 | 4.6033 |
| No log | 20.0 | 100 | 4.5601 |
| No log | 21.0 | 105 | 4.5433 |
| No log | 22.0 | 110 | 4.5279 |
| No log | 23.0 | 115 | 4.4981 |
| No log | 24.0 | 120 | 4.4831 |
| No log | 25.0 | 125 | 4.4745 |
| No log | 26.0 | 130 | 4.4607 |
| No log | 27.0 | 135 | 4.4528 |
| No log | 28.0 | 140 | 4.4348 |
| No log | 29.0 | 145 | 4.4418 |
| No log | 30.0 | 150 | 4.4380 |
| No log | 31.0 | 155 | 4.4205 |
| No log | 32.0 | 160 | 4.4373 |
| No log | 33.0 | 165 | 4.4302 |
| No log | 34.0 | 170 | 4.4468 |
| No log | 35.0 | 175 | 4.4512 |
| No log | 36.0 | 180 | 4.4225 |
| No log | 37.0 | 185 | 4.4303 |
| No log | 38.0 | 190 | 4.4562 |
| No log | 39.0 | 195 | 4.4671 |
| No log | 40.0 | 200 | 4.4869 |
| No log | 41.0 | 205 | 4.5046 |
| No log | 42.0 | 210 | 4.4990 |
| No log | 43.0 | 215 | 4.4847 |
| No log | 44.0 | 220 | 4.4770 |
| No log | 45.0 | 225 | 4.4786 |
| No log | 46.0 | 230 | 4.4741 |
| No log | 47.0 | 235 | 4.4797 |
| No log | 48.0 | 240 | 4.4830 |
| No log | 49.0 | 245 | 4.4845 |
| No log | 50.0 | 250 | 4.4873 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Saraswati/q-FrozenLake-v1-4x4-noSlippery | Saraswati | 2022-07-12T04:25:49Z | 0 | 1 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-12T04:25:40Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # needed for gym.make below

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook (not part of a published package).
model = load_from_hub(repo_id="Saraswati/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Evelyn18/legalectra-small-spanish-becasv3-1 | Evelyn18 | 2022-07-12T03:54:49Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:becasv2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-12T03:49:49Z | ---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legalectra-small-spanish-becasv3-1
This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 8 | 5.8980 |
| No log | 2.0 | 16 | 5.8136 |
| No log | 3.0 | 24 | 5.7452 |
| No log | 4.0 | 32 | 5.6940 |
| No log | 5.0 | 40 | 5.6554 |
| No log | 6.0 | 48 | 5.6241 |
| No log | 7.0 | 56 | 5.5997 |
| No log | 8.0 | 64 | 5.5830 |
| No log | 9.0 | 72 | 5.5730 |
| No log | 10.0 | 80 | 5.5694 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/hhelafifi | huggingtweets | 2022-07-12T02:49:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-12T02:32:46Z | ---
language: en
thumbnail: http://www.huggingtweets.com/hhelafifi/1657594186366/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1147337070920097793/06CZyryx_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Hussein</div>
<div style="text-align: center; font-size: 14px;">@hhelafifi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Hussein.
| Data | Hussein |
| --- | --- |
| Tweets downloaded | 820 |
| Retweets | 191 |
| Short tweets | 95 |
| Tweets kept | 534 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1j7uxays/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hhelafifi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20d5foa3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20d5foa3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hhelafifi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/yolov6s | nateraw | 2022-07-12T02:01:18Z | 0 | 0 | pytorch | [
"pytorch",
"object-detection",
"yolo",
"autogenerated-modelcard",
"en",
"arxiv:1910.09700",
"license:gpl-3.0",
"region:us"
] | object-detection | 2022-07-08T04:01:40Z | ---
language: en
license: gpl-3.0
library_name: pytorch
tags:
- object-detection
- yolo
- autogenerated-modelcard
model_name: yolov6s
---
# Model Card for yolov6s
<!-- Provide a quick summary of what the model is/does. -->
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.
- **Developed by:** [More Information Needed]
- **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw)
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models:** [yolov6t](https://hf.co/nateraw/yolov6t), [yolov6n](https://hf.co/nateraw/yolov6n)
- **Parent Model:** N/A
- **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is meant to be used as a general object detector.
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
You can fine-tune this model for your specific task.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Don't be evil.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
# Model Examination
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Model Card Authors [optional]
[@nateraw](https://hf.co/nateraw)
# Model Card Contact
[@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
[More Information Needed]
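A minimal, hypothetical sketch for fetching the checkpoint from this repo; the `yolov6s.pt` filename is an assumption, and inference itself still requires the official YOLOv6 codebase:
```python
from huggingface_hub import hf_hub_download

# NOTE: the filename below is an assumption; check this repo's file listing for the actual name.
ckpt_path = hf_hub_download(repo_id="nateraw/yolov6s", filename="yolov6s.pt")

# Inference would then be run with the official YOLOv6 repository
# (https://github.com/meituan/YOLOv6), pointing its inference script at ckpt_path.
print(ckpt_path)
```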
</details> |
nateraw/yolov6n | nateraw | 2022-07-12T02:01:10Z | 0 | 0 | pytorch | [
"pytorch",
"object-detection",
"yolo",
"autogenerated-modelcard",
"en",
"arxiv:1910.09700",
"license:gpl-3.0",
"region:us"
] | object-detection | 2022-07-08T04:01:21Z | ---
language: en
license: gpl-3.0
library_name: pytorch
tags:
- object-detection
- yolo
- autogenerated-modelcard
model_name: yolov6n
---
# Model Card for yolov6n
<!-- Provide a quick summary of what the model is/does. -->
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Model Examination](#model-examination)
7. [Environmental Impact](#environmental-impact)
8. [Technical Specifications](#technical-specifications-optional)
9. [Citation](#citation)
10. [Glossary](#glossary-optional)
11. [More Information](#more-information-optional)
12. [Model Card Authors](#model-card-authors-optional)
13. [Model Card Contact](#model-card-contact)
14. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.
- **Developed by:** [More Information Needed]
- **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw)
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Related Models:** [yolov6t](https://hf.co/nateraw/yolov6t), [yolov6s](https://hf.co/nateraw/yolov6s)
- **Parent Model:** N/A
- **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is meant to be used as a general object detector.
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
You can fine-tune this model for your specific task.
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Don't be evil.
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
# Model Examination
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6)
# Model Card Authors [optional]
[@nateraw](https://hf.co/nateraw)
# Model Card Contact
[@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
[More Information Needed]
</details> |
ArthurBaia/xlm-roberta-base-squad-pt | ArthurBaia | 2022-07-11T22:42:37Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v1_pt",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-11T16:59:16Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v1_pt
model-index:
- name: xlm-roberta-base-squad-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-squad-pt
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the squad_v1_pt dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
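As a SQuAD-style extractive QA model, it can be used with the standard `question-answering` pipeline; a minimal sketch (the question and context strings are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ArthurBaia/xlm-roberta-base-squad-pt")
result = qa(
    question="Quem descobriu o Brasil?",  # placeholder question
    context="O Brasil foi descoberto por Pedro Álvares Cabral em 1500.",  # placeholder context
)
print(result["answer"], result["score"])
```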
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
- "epoch": 3.0,
- "eval_exact_match": 44.45600756859035,
- "eval_f1": 57.37953911779836,
- "eval_samples": 11095
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1 |
skr1125/distilbert-base-uncased-finetuned-emotion | skr1125 | 2022-07-11T20:35:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-07-11T20:17:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9267721491352747
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Accuracy: 0.927
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
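A minimal usage sketch with the `text-classification` pipeline (the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="skr1125/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled with how this turned out!"))  # placeholder input
```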
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8507 | 1.0 | 250 | 0.3406 | 0.899 | 0.8954 |
| 0.2546 | 2.0 | 500 | 0.2253 | 0.927 | 0.9268 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sahilrajpal121/train5a1e8w7-label-classification | sahilrajpal121 | 2022-07-11T20:11:11Z | 0 | 0 | sklearn | [
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] | tabular-classification | 2022-07-11T20:11:07Z | ---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on train5a1e8w7 to apply classification on label
**Metrics of the best model:**
| Metric | Value |
|:--|--:|
| accuracy | 0.693101 |
| recall_macro | 0.665973 |
| precision_macro | 0.657625 |
| f1_macro | 0.656998 |

Best model: `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`
**Model pipeline (plain-text summary of the interactive scikit-learn plot):** a two-step `Pipeline` in which dabl's `EasyPreprocessor` infers per-column types (continuous, dirty_float, low_card_int, ..., date, free_string, useless) for the 40 feature columns `v_1` to `v_40`, followed by `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`.
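For illustration, a minimal sketch that reconstructs the same (unfitted) estimator configuration; it assumes `dabl` and scikit-learn are installed, and the `EasyPreprocessor` import path is an assumption rather than something stated in this card:
```python
from dabl import EasyPreprocessor  # assumption: dabl exposes EasyPreprocessor at the top level
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Unfitted reconstruction of the reported baseline pipeline; the fitted model lives in this repo.
pipe = make_pipeline(
    EasyPreprocessor(),
    LogisticRegression(C=0.1, class_weight="balanced", max_iter=1000),
)
```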
**Disclaimer:** This model was trained with the dabl library as a baseline; for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt |
jonatasgrosman/exp_w2v2t_pt_vp-it_s738 | jonatasgrosman | 2022-07-11T20:09:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T20:08:31Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-it_s738
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
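A minimal transcription sketch using the HuggingSound tool mentioned above (the audio path is a placeholder; input must be sampled at 16kHz):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-it_s738")
audio_paths = ["/path/to/audio.wav"]  # placeholder path; 16kHz audio expected
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```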
|
jonatasgrosman/exp_w2v2t_pt_vp-it_s996 | jonatasgrosman | 2022-07-11T19:59:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:58:21Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-it_s996
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s732 | jonatasgrosman | 2022-07-11T19:54:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:54:29Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_r-wav2vec2_s732
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s957 | jonatasgrosman | 2022-07-11T19:51:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:51:07Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_r-wav2vec2_s957
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s468 | jonatasgrosman | 2022-07-11T19:48:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:47:54Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_r-wav2vec2_s468
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_xls-r_s689 | jonatasgrosman | 2022-07-11T19:41:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:40:50Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xls-r_s689
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_xls-r_s17 | jonatasgrosman | 2022-07-11T19:38:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:37:21Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xls-r_s17
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
KD02/distilbert-base-uncased-finetuned-squad | KD02 | 2022-07-11T19:37:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-11T14:14:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [KD02/distilbert-base-uncased-finetuned-squad](https://huggingface.co/KD02/distilbert-base-uncased-finetuned-squad) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s103 | jonatasgrosman | 2022-07-11T19:34:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:33:36Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-sat_s103
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s377 | jonatasgrosman | 2022-07-11T19:30:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:29:59Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-sat_s377
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s756 | jonatasgrosman | 2022-07-11T19:26:48Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech-sat",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:26:24Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-sat_s756
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-nl_s833 | jonatasgrosman | 2022-07-11T19:13:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:12:53Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-nl_s833
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-es_s291 | jonatasgrosman | 2022-07-11T19:09:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:08:58Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-es_s291
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-es_s506 | jonatasgrosman | 2022-07-11T19:05:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:04:54Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-es_s506
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-es_s454 | jonatasgrosman | 2022-07-11T19:02:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T19:01:28Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-es_s454
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-fr_s752 | jonatasgrosman | 2022-07-11T18:58:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:57:25Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-fr_s752
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-fr_s485 | jonatasgrosman | 2022-07-11T18:54:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:53:30Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-fr_s485
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s808 | jonatasgrosman | 2022-07-11T18:31:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:30:46Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-ml_s808
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s324 | jonatasgrosman | 2022-07-11T18:27:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:26:59Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech-ml_s324
Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_wavlm_s118 | jonatasgrosman | 2022-07-11T18:23:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:22:59Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_wavlm_s118
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_wavlm_s691 | jonatasgrosman | 2022-07-11T18:13:28Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wavlm",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:13:02Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_wavlm_s691
Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s34 | jonatasgrosman | 2022-07-11T18:06:01Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T18:05:36Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_no-pretraining_s34
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-sv_s563 | jonatasgrosman | 2022-07-11T17:51:15Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:50:36Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-sv_s563
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_vp-sv_s612 | jonatasgrosman | 2022-07-11T17:47:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:47:09Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_vp-sv_s612
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_hubert_s807 | jonatasgrosman | 2022-07-11T17:36:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:36:06Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_hubert_s807
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
ianspektor/reinforce-CartPole-v1 | ianspektor | 2022-07-11T17:36:19Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-11T16:33:35Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
results:
- metrics:
- type: mean_reward
value: 359.42 +/- 89.49
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
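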
|
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner | kinanmartin | 2022-07-11T17:29:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:toydata",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-11T03:49:46Z | ---
tags:
- generated_from_trainer
datasets:
- toydata
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: toydata
type: toydata
args: SDN
metrics:
- name: Precision
type: precision
value: 0.9132452695465905
- name: Recall
type: recall
value: 0.9205854126679462
- name: F1
type: f1
value: 0.9169006511739053
- name: Accuracy
type: accuracy
value: 0.9784804945824268
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-ner-hrl-finetuned-ner
This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0944
- Precision: 0.9132
- Recall: 0.9206
- F1: 0.9169
- Accuracy: 0.9785
## Model description
More information needed
## Intended uses & limitations
More information needed
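Although the card leaves this section open, inference should work through the standard token-classification pipeline; the sketch below is a hedged illustration in which the example sentence is a placeholder and the entity labels depend on the toydata annotations.

```python
# Hedged sketch: run the fine-tuned checkpoint through the generic NER pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Omar met the delegation in Khartoum on Monday."))  # placeholder sentence
```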
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 408 | 0.0900 | 0.8508 | 0.9303 | 0.8888 | 0.9719 |
| 0.1087 | 2.0 | 816 | 0.0827 | 0.9043 | 0.9230 | 0.9136 | 0.9783 |
| 0.0503 | 3.0 | 1224 | 0.0944 | 0.9132 | 0.9206 | 0.9169 | 0.9785 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_pt_unispeech_s186 | jonatasgrosman | 2022-07-11T17:26:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:26:14Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_unispeech_s186
Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s829 | jonatasgrosman | 2022-07-11T17:23:34Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:23:00Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xlsr-53_s829
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s677 | jonatasgrosman | 2022-07-11T17:17:00Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T17:16:33Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_xlsr-53_s677
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_wav2vec2_s859 | jonatasgrosman | 2022-07-11T16:58:14Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:57:41Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_wav2vec2_s859
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_pt_wav2vec2_s515 | jonatasgrosman | 2022-07-11T16:54:47Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:54:22Z | ---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_pt_wav2vec2_s515
Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_vp-it_s179 | jonatasgrosman | 2022-07-11T16:44:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:44:09Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_vp-it_s179
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_vp-it_s438 | jonatasgrosman | 2022-07-11T16:41:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:40:28Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_vp-it_s438
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s227 | jonatasgrosman | 2022-07-11T16:34:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:33:36Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_r-wav2vec2_s227
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s870 | jonatasgrosman | 2022-07-11T16:30:36Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:29:58Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_r-wav2vec2_s870
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_r-wav2vec2_s809 | jonatasgrosman | 2022-07-11T16:26:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:26:08Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_r-wav2vec2_s809
Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
AdiKompella/Reinforce-CartPole | AdiKompella | 2022-07-11T16:26:05Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-07-11T16:25:53Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- metrics:
- type: mean_reward
value: 276.70 +/- 57.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
jonatasgrosman/exp_w2v2t_es_xls-r_s691 | jonatasgrosman | 2022-07-11T16:19:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:18:30Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_xls-r_s691
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_xls-r_s118 | jonatasgrosman | 2022-07-11T16:13:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T16:12:22Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_xls-r_s118
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
sledz08/finetuned-bert-piqa | sledz08 | 2022-07-11T15:54:20Z | 58 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:piqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-07-11T15:23:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- piqa
metrics:
- accuracy
model-index:
- name: finetuned-bert-piqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-piqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the piqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6603
- Accuracy: 0.6518
## Model description
More information needed
## Intended uses & limitations
More information needed
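The checkpoint is a multiple-choice head on BERT, so inference follows the standard AutoModelForMultipleChoice pattern; below is a hedged sketch with an invented PIQA-style goal and two candidate solutions.

```python
# Hedged sketch: score two candidate solutions for a PIQA-style goal.
# The goal/solution strings are invented placeholders.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "sledz08/finetuned-bert-piqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

goal = "To keep bread fresh for longer,"
solutions = ["store it in an airtight bag.", "leave it out in direct sunlight."]

# The model expects input of shape (batch_size, num_choices, seq_len),
# so encode each (goal, solution) pair and add a batch dimension.
enc = tokenizer([goal] * len(solutions), solutions, padding=True, return_tensors="pt")
batch = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**batch).logits  # shape (1, num_choices)
print("Predicted solution:", solutions[logits.argmax(dim=-1).item()])
```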
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 251 | 0.6751 | 0.6115 |
| 0.6628 | 2.0 | 502 | 0.6556 | 0.6534 |
| 0.6628 | 3.0 | 753 | 0.6603 | 0.6518 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nateraw/keras-dummy-model-mixin-demo | nateraw | 2022-07-11T15:42:05Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
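Since the repository is tagged with the Keras library, it can presumably be reloaded with the hub's Keras mixin helpers; the sketch below is a hedged assumption that the repo stores a model pushed via that mixin.

```python
# Hedged sketch: reload a Keras model pushed with the hub's Keras mixin utilities.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("nateraw/keras-dummy-model-mixin-demo")
model.summary()
```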
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
ericntay/clinical_bert_ft | ericntay | 2022-07-11T15:30:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-07-11T10:38:42Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: clinical_bert_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical_bert_ft
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2439
- F1: 0.8252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
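For reference, these settings map roughly onto the following TrainingArguments; this is a hedged reconstruction, since the actual training script is not included in the card, and the output directory name is an assumption.

```python
# Hedged reconstruction of the listed hyperparameters as HF TrainingArguments;
# output_dir is an assumption, and AdamW's defaults match the betas/epsilon above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="clinical_bert_ft",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```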
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5938 | 1.0 | 95 | 0.2480 | 0.7084 |
| 0.1567 | 2.0 | 190 | 0.2035 | 0.7855 |
| 0.083 | 3.0 | 285 | 0.2002 | 0.8026 |
| 0.0482 | 4.0 | 380 | 0.2046 | 0.8118 |
| 0.0269 | 5.0 | 475 | 0.2230 | 0.8143 |
| 0.0185 | 6.0 | 570 | 0.2178 | 0.8175 |
| 0.0123 | 7.0 | 665 | 0.2269 | 0.8253 |
| 0.0093 | 8.0 | 760 | 0.2421 | 0.8227 |
| 0.0072 | 9.0 | 855 | 0.2446 | 0.8267 |
| 0.006 | 10.0 | 950 | 0.2439 | 0.8252 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ariesutiono/finetuned-test-1 | ariesutiono | 2022-07-11T14:57:10Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-07-11T13:24:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: finetuned-test-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-test-1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8192
## Model description
More information needed
## Intended uses & limitations
More information needed
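Since this is a masked-language-model fine-tune rather than a token classifier, the natural way to query it is the fill-mask pipeline; a hedged sketch with a placeholder sentence follows.

```python
# Hedged sketch: query the domain-adapted masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ariesutiono/finetuned-test-1")

# Placeholder sentence in the CoNLL-2003 (newswire) style; [MASK] is BERT's mask token.
for pred in fill_mask("The match between Germany and [MASK] ended in a draw."):
    print(pred["token_str"], round(pred["score"], 3))
```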
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8219 | 1.0 | 30 | 2.3343 |
| 2.4148 | 2.0 | 60 | 2.2010 |
| 2.3236 | 3.0 | 90 | 2.1442 |
| 2.2231 | 4.0 | 120 | 2.1651 |
| 2.2171 | 5.0 | 150 | 2.0614 |
| 2.127 | 6.0 | 180 | 2.0405 |
| 2.0748 | 7.0 | 210 | 2.0092 |
| 2.0511 | 8.0 | 240 | 1.9798 |
| 2.0097 | 9.0 | 270 | 1.8662 |
| 1.9969 | 10.0 | 300 | 1.9257 |
| 2.0006 | 11.0 | 330 | 1.9386 |
| 1.9273 | 12.0 | 360 | 1.9357 |
| 1.9177 | 13.0 | 390 | 1.8983 |
| 1.9128 | 14.0 | 420 | 1.8990 |
| 1.8979 | 15.0 | 450 | 1.9037 |
| 1.8721 | 16.0 | 480 | 1.8440 |
| 1.8998 | 17.0 | 510 | 1.8404 |
| 1.8862 | 18.0 | 540 | 1.9193 |
| 1.9133 | 19.0 | 570 | 1.8494 |
| 1.8799 | 20.0 | 600 | 1.8192 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jonatasgrosman/exp_w2v2t_es_vp-es_s250 | jonatasgrosman | 2022-07-11T14:23:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T14:22:53Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_vp-es_s250
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
jonatasgrosman/exp_w2v2t_es_vp-es_s515 | jonatasgrosman | 2022-07-11T13:49:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"es",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-07-11T13:48:54Z | ---
language:
- es
license: apache-2.0
tags:
- automatic-speech-recognition
- es
datasets:
- mozilla-foundation/common_voice_7_0
---
# exp_w2v2t_es_vp-es_s515
Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
Dudul/dudul | Dudul | 2022-07-11T13:09:08Z | 0 | 0 | null | [
"region:us"
] | null | 2022-07-11T01:50:50Z | ---
title: Cryptopunks Generator
emoji: 🧠➡️🙍♀️
colorFrom: red
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
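The header above is a Spaces configuration (Gradio SDK, entry point app.py); purely to illustrate that layout, here is a hedged stand-in for what such an app.py could contain — the generate function below is invented and is not the actual Space code.

```python
# Hedged stand-in for the app.py entry point declared above; the generation
# logic is a placeholder, not the real Cryptopunks Generator.
import gradio as gr

def generate(seed: float) -> str:
    return f"(placeholder) would generate a CryptoPunk for seed {int(seed)}"

demo = gr.Interface(fn=generate, inputs="number", outputs="text",
                    title="Cryptopunks Generator")

if __name__ == "__main__":
    demo.launch()
```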
|
egg22314/LaserTube | egg22314 | 2022-07-11T13:03:19Z | 0 | 1 | null | [
"region:us"
] | null | 2022-07-11T13:01:55Z | Watching YouTube videos too boring for you? Wish you could be punished for not clicking on stuff fast enough while you watch a cat play the piano? Well, LaserTube is here to solve that problem, by letting you turn any YouTube video into a genuine simulation of an oldschool laserdisc arcade game!
Work in progress. |