| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| lewtun/autotrain-acronym-identification-7324788 | lewtun | 2022-08-25T13:34:54Z | 33 | 0 | transformers | ["transformers", "pytorch", "bert", "token-classification", "autotrain", "en", "dataset:lewtun/autotrain-data-acronym-identification", "dataset:acronym_identification", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-06-24T10:11:47Z |
---
tags:
- autotrain
language: en
widget:
- text: "I love AutoTrain \U0001F917"
datasets:
- lewtun/autotrain-data-acronym-identification
- acronym_identification
co2_eq_emissions: 10.435358044493652
model-index:
- name: autotrain-demo
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: acronym_identification
type: acronym_identification
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9708090976211485
- task:
type: token-classification
name: Token Classification
dataset:
name: acronym_identification
type: acronym_identification
config: default
split: train
metrics:
- name: Accuracy
type: accuracy
value: 0.9790777669399117
verified: true
- name: Precision
type: precision
value: 0.9197835301644851
verified: true
- name: Recall
type: recall
value: 0.946479027789208
verified: true
- name: F1
type: f1
value: 0.9329403493591477
verified: true
- name: loss
type: loss
value: 0.06360606849193573
verified: true
- task:
type: token-classification
name: Token Classification
dataset:
name: acronym_identification
type: acronym_identification
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9758354452761242
verified: true
- name: Precision
type: precision
value: 0.9339674814732883
verified: true
- name: Recall
type: recall
value: 0.9159344831326608
verified: true
- name: F1
type: f1
value: 0.9248630887185104
verified: true
- name: loss
type: loss
value: 0.07593930512666702
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 7324788
- CO2 Emissions (in grams): 10.435358044493652
## Validation Metrics
- Loss: 0.08991389721632004
- Accuracy: 0.9708090976211485
- Precision: 0.8998421675654347
- Recall: 0.9309429854401959
- F1: 0.9151284109149278
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/lewtun/autotrain-acronym-identification-7324788
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("lewtun/autotrain-acronym-identification-7324788", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
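To get decoded entity spans instead of raw logits, the same checkpoint can be wrapped in a `pipeline`; a minimal sketch (the aggregation strategy and example sentence are illustrative assumptions, not part of the original card):
```python
from transformers import pipeline

# Token-classification pipeline over the same checkpoint;
# "simple" aggregation merges word pieces into labeled spans.
ner = pipeline(
    "token-classification",
    model="lewtun/autotrain-acronym-identification-7324788",
    aggregation_strategy="simple",
)
print(ner("The model was evaluated on the acronym identification (AI) task."))
```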
| BenTata-86/distilbert-base-turkish-cased-finetuned-emotion | BenTata-86 | 2022-08-25T12:54:54Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:turkish-multiclass-dataset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T11:45:25Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- turkish-multiclass-dataset
metrics:
- f1
model-index:
- name: distilbert-base-turkish-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: turkish-multiclass-dataset
type: turkish-multiclass-dataset
config: TurkishMulticlassDataset
split: train
args: TurkishMulticlassDataset
metrics:
- name: F1
type: f1
value: 0.8276613385259164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased-finetuned-emotion
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the turkish-multiclass-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4861
- F1: 0.8276613385259164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|
| 0.2578 | 1.0 | 313 | 0.5459 | 0.8212239281513611 |
| 0.381 | 2.0 | 626 | 0.4861 | 0.8276613385259164 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| dav3794/demo_knots_1_8 | dav3794 | 2022-08-25T12:20:03Z | 107 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:dav3794/autotrain-data-demo-knots_1_8", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T12:13:15Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dav3794/autotrain-data-demo-knots_1_8
co2_eq_emissions:
emissions: 0.06357782150508624
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1316050278
- CO2 Emissions (in grams): 0.0636
## Validation Metrics
- Loss: 0.242
- Accuracy: 0.931
- Precision: 0.943
- Recall: 0.981
- AUC: 0.852
- F1: 0.962
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots_1_8-1316050278
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots_1_8-1316050278", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots_1_8-1316050278", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
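The checkpoint can also be called through the high-level `pipeline` API; a minimal sketch (the example input is just a placeholder string):
```python
from transformers import pipeline

# Binary text-classification pipeline; returns the predicted label and its score.
classifier = pipeline(
    "text-classification",
    model="dav3794/autotrain-demo-knots_1_8-1316050278",
)
print(classifier("I love AutoTrain"))
```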
| dav3794/demo_knots_12_error | dav3794 | 2022-08-25T11:39:44Z | 105 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:dav3794/autotrain-data-demo-knots-1-2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T11:37:05Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dav3794/autotrain-data-demo-knots-1-2
co2_eq_emissions:
emissions: 0.019866640922183956
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1315950270
- CO2 Emissions (in grams): 0.0199
## Validation Metrics
- Loss: 0.396
- Accuracy: 0.792
- Precision: 0.915
- Recall: 0.652
- AUC: 0.900
- F1: 0.761
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots-1-2-1315950270
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots-1-2-1315950270", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots-1-2-1315950270", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| dav3794/demo_knots_all | dav3794 | 2022-08-25T11:21:43Z | 105 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:dav3794/autotrain-data-demo-knots-all", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T11:08:10Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- dav3794/autotrain-data-demo-knots-all
co2_eq_emissions:
emissions: 0.1285808899475734
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1315850267
- CO2 Emissions (in grams): 0.1286
## Validation Metrics
- Loss: 0.085
- Accuracy: 0.982
- Precision: 0.984
- Recall: 0.997
- AUC: 0.761
- F1: 0.991
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dav3794/autotrain-demo-knots-all-1315850267
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dav3794/autotrain-demo-knots-all-1315850267", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dav3794/autotrain-demo-knots-all-1315850267", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| muhtasham/bert-small-finetuned-ner-to-multilabel-finer-19 | muhtasham | 2022-08-25T09:39:38Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T09:32:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-ner-to-multilabel-finer-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-ner-to-multilabel-finer-19
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.208 | 0.03 | 500 | 0.1137 |
| 0.1026 | 0.06 | 1000 | 0.1170 |
| 0.0713 | 0.1 | 1500 | 0.1301 |
| 0.0567 | 0.13 | 2000 | 0.1389 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| silviacamplani/distilbert-finetuned-ner-ai | silviacamplani | 2022-08-25T07:40:11Z | 61 | 0 | transformers | ["transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-25T07:36:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: silviacamplani/distilbert-finetuned-ner-ai
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-finetuned-ner-ai
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8962
- Validation Loss: 0.9088
- Train Precision: 0.3895
- Train Recall: 0.3901
- Train F1: 0.3898
- Train Accuracy: 0.7558
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 350, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.5761 | 1.7934 | 0.0 | 0.0 | 0.0 | 0.6480 | 0 |
| 1.7098 | 1.5860 | 0.0 | 0.0 | 0.0 | 0.6480 | 1 |
| 1.4692 | 1.3213 | 0.0 | 0.0 | 0.0 | 0.6480 | 2 |
| 1.2755 | 1.1859 | 0.1154 | 0.0460 | 0.0658 | 0.6789 | 3 |
| 1.1561 | 1.0921 | 0.2878 | 0.2010 | 0.2367 | 0.7192 | 4 |
| 1.0652 | 1.0170 | 0.3250 | 0.2862 | 0.3043 | 0.7354 | 5 |
| 0.9936 | 0.9649 | 0.3489 | 0.3305 | 0.3395 | 0.7462 | 6 |
| 0.9442 | 0.9340 | 0.3845 | 0.3799 | 0.3822 | 0.7549 | 7 |
| 0.9097 | 0.9168 | 0.3866 | 0.3748 | 0.3806 | 0.7556 | 8 |
| 0.8962 | 0.9088 | 0.3895 | 0.3901 | 0.3898 | 0.7558 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| noob123/autotrain-app_review_train_dilbert-1314250179 | noob123 | 2022-08-25T04:43:23Z | 105 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "autotrain", "unk", "dataset:noob123/autotrain-data-app_review_train_dilbert", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-25T04:42:31Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- noob123/autotrain-data-app_review_train_dilbert
co2_eq_emissions:
emissions: 0.004444293595896442
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1314250179
- CO2 Emissions (in grams): 0.0044
## Validation Metrics
- Loss: 0.447
- Accuracy: 0.809
- Precision: 0.857
- Recall: 0.855
- AUC: 0.857
- F1: 0.856
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/noob123/autotrain-app_review_train_dilbert-1314250179
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("noob123/autotrain-app_review_train_dilbert-1314250179", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("noob123/autotrain-app_review_train_dilbert-1314250179", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| jcgarciaca/dqn-SpaceInvadersNoFrameskip-v4 | jcgarciaca | 2022-08-25T03:22:56Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-08-25T03:22:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 779.00 +/- 179.64
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcgarciaca -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jcgarciaca
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| morganchen1007/swin-tiny-patch4-window7-224-finetuned-eurosat | morganchen1007 | 2022-08-25T01:34:28Z | 51 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-08-23T08:30:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9341978866474544
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1507
- Accuracy: 0.9342
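For inference, the fine-tuned checkpoint can be loaded with the image-classification `pipeline`; a minimal sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Loads the fine-tuned Swin checkpoint together with its image processor.
classifier = pipeline(
    "image-classification",
    model="morganchen1007/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("example.jpg"))  # accepts a file path, URL, or PIL.Image
```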
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2891 | 1.0 | 146 | 0.2322 | 0.9068 |
| 0.2609 | 2.0 | 292 | 0.1710 | 0.9227 |
| 0.2417 | 3.0 | 438 | 0.1830 | 0.9251 |
| 0.2406 | 4.0 | 584 | 0.1809 | 0.9198 |
| 0.2113 | 5.0 | 730 | 0.1631 | 0.9289 |
| 0.1812 | 6.0 | 876 | 0.1561 | 0.9308 |
| 0.2082 | 7.0 | 1022 | 0.1507 | 0.9342 |
| 0.1922 | 8.0 | 1168 | 0.1611 | 0.9294 |
| 0.1715 | 9.0 | 1314 | 0.1536 | 0.9308 |
| 0.1675 | 10.0 | 1460 | 0.1609 | 0.9289 |
| 0.194 | 11.0 | 1606 | 0.1499 | 0.9337 |
| 0.1706 | 12.0 | 1752 | 0.1514 | 0.9323 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| ZhiyuanQiu/camembert-base-finetuned-Train_RAW10-dd | ZhiyuanQiu | 2022-08-25T01:21:51Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "camembert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-25T00:06:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: camembert-base-finetuned-Train_RAW10-dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-Train_RAW10-dd
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2175
- Precision: 0.8744
- Recall: 0.9056
- F1: 0.8897
- Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1873 | 1.0 | 9930 | 0.2088 | 0.8652 | 0.8927 | 0.8788 | 0.9326 |
| 0.1533 | 2.0 | 19860 | 0.2175 | 0.8744 | 0.9056 | 0.8897 | 0.9357 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| MBMMurad/wav2vec2_murad_with_some_new_data | MBMMurad | 2022-08-24T23:33:11Z | 7 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:cvbn", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-08-24T05:29:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cvbn
model-index:
- name: wav2vec2_murad_with_some_new_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_murad_with_some_new_data
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2971
- eval_wer: 0.2084
- eval_runtime: 511.5492
- eval_samples_per_second: 9.774
- eval_steps_per_second: 0.612
- epoch: 26.88
- step: 33600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| MBMMurad/wav2vec2_imtiaz | MBMMurad | 2022-08-24T21:33:01Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:cvbn", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-08-21T12:53:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cvbn
model-index:
- name: wav2vec2_imtiaz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_imtiaz
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1956
- eval_wer: 0.2202
- eval_runtime: 574.912
- eval_samples_per_second: 8.697
- eval_steps_per_second: 0.544
- epoch: 9.41
- step: 22000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
| dvalbuena1/Reinforce-Pixelcopter | dvalbuena1 | 2022-08-24T20:51:51Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2022-08-24T20:51:41Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.70 +/- 7.89
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
| noob123/autotrain-app_review_bert_train-1310050094 | noob123 | 2022-08-24T20:30:47Z | 105 | 1 | transformers | ["transformers", "pytorch", "bert", "text-classification", "autotrain", "unk", "dataset:noob123/autotrain-data-app_review_bert_train", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-24T20:28:47Z |
---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- noob123/autotrain-data-app_review_bert_train
co2_eq_emissions:
emissions: 4.094086460501482
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1310050094
- CO2 Emissions (in grams): 4.0941
## Validation Metrics
- Loss: 0.449
- Accuracy: 0.800
- Precision: 0.855
- Recall: 0.844
- AUC: 0.851
- F1: 0.849
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/noob123/autotrain-app_review_bert_train-1310050094
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("noob123/autotrain-app_review_bert_train-1310050094", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("noob123/autotrain-app_review_bert_train-1310050094", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
| dvalbuena1/Reinforce-CartPole | dvalbuena1 | 2022-08-24T18:41:30Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2022-08-24T18:39:32Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 209.20 +/- 17.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
| dboshardy/ddpm-butterflies-128 | dboshardy | 2022-08-24T18:40:28Z | 1 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-08-24T17:51:08Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
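Until the snippet above is filled in, a minimal sketch of the usual `DDPMPipeline` pattern for this kind of repository (an assumption based on the `diffusers:DDPMPipeline` tag, not code from the original card):
```python
from diffusers import DDPMPipeline

# Load the unconditional DDPM pipeline from this repository.
pipeline = DDPMPipeline.from_pretrained("dboshardy/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full denoising loop; returns PIL images
image.save("butterfly.png")
```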
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/dboshardy/ddpm-butterflies-128/tensorboard?#scalars)
| Aimlab/Roberta-Base-NER | Aimlab | 2022-08-24T18:12:42Z | 105 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-24T17:34:06Z |
---
widget:
- text: "سبحان کی لاہور سے کوئٹہ کی فلائٹ ہے"
example_title: "Example 1"
---
| ericntay/mlm_gh_issues | ericntay | 2022-08-24T17:26:28Z | 105 | 0 | transformers | ["transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-08-24T16:07:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mlm_gh_issues
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mlm_gh_issues
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2449
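Since this is a masked-language model, it can be queried with the fill-mask `pipeline`; a minimal sketch (the example sentence is a placeholder):
```python
from transformers import pipeline

# Predicts the most likely fillers for the [MASK] token.
fill_mask = pipeline("fill-mask", model="ericntay/mlm_gh_issues")
print(fill_mask("This issue describes a [MASK] in the tokenizer."))
```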
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.099 | 1.0 | 291 | 1.6946 |
| 1.6396 | 2.0 | 582 | 1.4288 |
| 1.4875 | 3.0 | 873 | 1.3893 |
| 1.399 | 4.0 | 1164 | 1.3812 |
| 1.341 | 5.0 | 1455 | 1.2004 |
| 1.2803 | 6.0 | 1746 | 1.2738 |
| 1.2397 | 7.0 | 2037 | 1.2645 |
| 1.199 | 8.0 | 2328 | 1.2092 |
| 1.166 | 9.0 | 2619 | 1.1871 |
| 1.1406 | 10.0 | 2910 | 1.2244 |
| 1.1293 | 11.0 | 3201 | 1.2061 |
| 1.1037 | 12.0 | 3492 | 1.1621 |
| 1.0824 | 13.0 | 3783 | 1.2540 |
| 1.0738 | 14.0 | 4074 | 1.1703 |
| 1.0625 | 15.0 | 4365 | 1.1195 |
| 1.0628 | 16.0 | 4656 | 1.2449 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| mmillet/distilrubert-tiny-cased-conversational-v1_finetuned_empathy_classifier | mmillet | 2022-08-24T17:07:49Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-24T17:05:02Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilrubert-tiny-cased-conversational-v1_finetuned_empathy_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilrubert-tiny-cased-conversational-v1_finetuned_empathy_classifier
This model is a fine-tuned version of [DeepPavlov/distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6624
- Accuracy: 0.6780
- F1: 0.6878
- Precision: 0.7175
- Recall: 0.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=0.0001
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.09 | 1.0 | 9 | 1.0661 | 0.4407 | 0.4464 | 0.6498 | 0.4407 |
| 1.0292 | 2.0 | 18 | 0.9658 | 0.5678 | 0.5223 | 0.5179 | 0.5678 |
| 0.942 | 3.0 | 27 | 0.8659 | 0.5932 | 0.5807 | 0.5723 | 0.5932 |
| 0.8614 | 4.0 | 36 | 0.7864 | 0.6186 | 0.5924 | 0.5879 | 0.6186 |
| 0.8002 | 5.0 | 45 | 0.7766 | 0.6017 | 0.5946 | 0.6086 | 0.6017 |
| 0.7633 | 6.0 | 54 | 0.7545 | 0.6186 | 0.6022 | 0.6151 | 0.6186 |
| 0.7249 | 7.0 | 63 | 0.7649 | 0.6356 | 0.6381 | 0.6921 | 0.6356 |
| 0.6687 | 8.0 | 72 | 0.7115 | 0.6695 | 0.6741 | 0.7154 | 0.6695 |
| 0.6426 | 9.0 | 81 | 0.6554 | 0.6864 | 0.6761 | 0.6807 | 0.6864 |
| 0.6144 | 10.0 | 90 | 0.6649 | 0.6864 | 0.6909 | 0.7172 | 0.6864 |
| 0.6252 | 11.0 | 99 | 0.8685 | 0.6186 | 0.6118 | 0.6880 | 0.6186 |
| 0.5988 | 12.0 | 108 | 0.6306 | 0.6949 | 0.7015 | 0.7107 | 0.6949 |
| 0.56 | 13.0 | 117 | 0.6919 | 0.6610 | 0.6662 | 0.7061 | 0.6610 |
| 0.5468 | 14.0 | 126 | 0.6563 | 0.6949 | 0.6980 | 0.7188 | 0.6949 |
| 0.5658 | 15.0 | 135 | 0.6351 | 0.6949 | 0.7048 | 0.7280 | 0.6949 |
| 0.5262 | 16.0 | 144 | 0.6902 | 0.6780 | 0.6821 | 0.7173 | 0.6780 |
| 0.4777 | 17.0 | 153 | 0.6237 | 0.6949 | 0.6981 | 0.7056 | 0.6949 |
| 0.4771 | 18.0 | 162 | 0.6688 | 0.6780 | 0.6799 | 0.7035 | 0.6780 |
| 0.4737 | 19.0 | 171 | 0.6482 | 0.6864 | 0.6957 | 0.7219 | 0.6864 |
| 0.5033 | 20.0 | 180 | 0.6624 | 0.6780 | 0.6878 | 0.7175 | 0.6780 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| ROBERTaCoder/wav2vec2-base-timit-demo-google-colab | ROBERTaCoder | 2022-08-24T17:07:25Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-08-24T11:17:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5452
- Wer: 0.3296
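For transcription, the checkpoint can be used through the automatic-speech-recognition `pipeline`; a minimal sketch (the audio filename is a placeholder, and 16 kHz mono input is assumed):
```python
from transformers import pipeline

# Wav2Vec2 CTC model; expects 16 kHz mono audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="ROBERTaCoder/wav2vec2-base-timit-demo-google-colab",
)
print(asr("sample.wav")["text"])  # placeholder filename
```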
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5557 | 1.0 | 500 | 1.9362 | 1.0072 |
| 0.867 | 2.01 | 1000 | 0.5197 | 0.5173 |
| 0.4281 | 3.01 | 1500 | 0.4609 | 0.4552 |
| 0.3002 | 4.02 | 2000 | 0.4066 | 0.4129 |
| 0.2252 | 5.02 | 2500 | 0.4122 | 0.3952 |
| 0.1857 | 6.02 | 3000 | 0.4650 | 0.3990 |
| 0.1541 | 7.03 | 3500 | 0.4784 | 0.3834 |
| 0.1372 | 8.03 | 4000 | 0.3875 | 0.3805 |
| 0.1213 | 9.04 | 4500 | 0.5606 | 0.4002 |
| 0.1043 | 10.04 | 5000 | 0.4713 | 0.3762 |
| 0.0972 | 11.04 | 5500 | 0.4770 | 0.3692 |
| 0.0876 | 12.05 | 6000 | 0.4755 | 0.3671 |
| 0.0812 | 13.05 | 6500 | 0.4854 | 0.3616 |
| 0.0705 | 14.06 | 7000 | 0.4380 | 0.3659 |
| 0.0759 | 15.06 | 7500 | 0.5025 | 0.3516 |
| 0.0709 | 16.06 | 8000 | 0.5310 | 0.3577 |
| 0.0572 | 17.07 | 8500 | 0.5097 | 0.3561 |
| 0.0572 | 18.07 | 9000 | 0.5150 | 0.3510 |
| 0.0482 | 19.08 | 9500 | 0.4954 | 0.3488 |
| 0.0703 | 20.08 | 10000 | 0.5279 | 0.3512 |
| 0.0457 | 21.08 | 10500 | 0.5336 | 0.3459 |
| 0.036 | 22.09 | 11000 | 0.5471 | 0.3440 |
| 0.0368 | 23.09 | 11500 | 0.5109 | 0.3417 |
| 0.0342 | 24.1 | 12000 | 0.5506 | 0.3415 |
| 0.0318 | 25.1 | 12500 | 0.5291 | 0.3357 |
| 0.03 | 26.1 | 13000 | 0.5347 | 0.3363 |
| 0.026 | 27.11 | 13500 | 0.5475 | 0.3318 |
| 0.0232 | 28.11 | 14000 | 0.5628 | 0.3332 |
| 0.0246 | 29.12 | 14500 | 0.5452 | 0.3296 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
| ZhiyuanQiu/camembert-base-finetuned-RAW20-dd | ZhiyuanQiu | 2022-08-24T16:48:58Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "camembert", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-24T13:34:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: camembert-base-finetuned-RAW20-dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-finetuned-RAW20-dd
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4309
- Precision: 0.8706
- Recall: 0.8429
- F1: 0.8565
- Accuracy: 0.9926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.136 | 1.0 | 9942 | 0.4309 | 0.8706 | 0.8429 | 0.8565 | 0.9926 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| Ayushb/roberta-base-ft-esg | Ayushb | 2022-08-24T16:47:17Z | 19 | 0 | transformers | ["transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us"] | question-answering | 2022-08-22T14:18:12Z |
# ESG Question Answering
An NLP service to identify Emission Reduction Targets and Mechanisms of various companies from their ESG disclosures or Annual Reports.
This RoBERTa-base model has been fine-tuned on a very small, manually annotated sample. Many companies mention targets & goals more clearly than methodologies, which is why the model currently identifies targets & goals more precisely than mechanisms.
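A minimal extractive question-answering sketch using the `pipeline` helper (the question and context strings are placeholders):
```python
from transformers import pipeline

# Extractive QA over a passage taken from an ESG report.
qa = pipeline("question-answering", model="Ayushb/roberta-base-ft-esg")
result = qa(
    question="What is the company's emission reduction target?",
    context="The company aims to reduce Scope 1 and 2 emissions by 50% by 2030.",
)
print(result["answer"], result["score"])
```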
## Authors
- [@Ayush Bhosle](https://www.github.com/Ayush1702)
| IbrahimMavus/ddpm-butterflies-129 | IbrahimMavus | 2022-08-24T16:26:43Z | 3 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-08-24T15:08:11Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-129
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
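Pending the snippet above, the same `DDPMPipeline` pattern suggested by the repository tags should apply here (a sketch, not code from the original card):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("IbrahimMavus/ddpm-butterflies-129")
image = pipeline().images[0]  # unconditional sampling; returns PIL images
```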
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/IbrahimMavus/ddpm-butterflies-129/tensorboard?#scalars)
| HYM/bert-base-chinese-ws-finetuned-ner_all | HYM | 2022-08-24T15:49:34Z | 9 | 1 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-24T15:01:38Z |
---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-ws-finetuned-ner_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-ws-finetuned-ner_all
This model is a fine-tuned version of [ckiplab/bert-base-chinese-ws](https://huggingface.co/ckiplab/bert-base-chinese-ws) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0330
- Precision: 0.9723
- Recall: 0.9734
- F1: 0.9728
- Accuracy: 0.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0648 | 0.29 | 500 | 0.0524 | 0.9586 | 0.9572 | 0.9579 | 0.9813 |
| 0.0509 | 0.59 | 1000 | 0.0460 | 0.9615 | 0.9628 | 0.9622 | 0.9832 |
| 0.0478 | 0.88 | 1500 | 0.0429 | 0.9624 | 0.9660 | 0.9642 | 0.9840 |
| 0.0417 | 1.17 | 2000 | 0.0409 | 0.9650 | 0.9680 | 0.9665 | 0.9851 |
| 0.0402 | 1.47 | 2500 | 0.0387 | 0.9662 | 0.9693 | 0.9677 | 0.9856 |
| 0.0378 | 1.76 | 3000 | 0.0359 | 0.9699 | 0.9717 | 0.9708 | 0.9869 |
| 0.0385 | 2.05 | 3500 | 0.0353 | 0.9703 | 0.9718 | 0.9710 | 0.9871 |
| 0.0337 | 2.34 | 4000 | 0.0341 | 0.9709 | 0.9731 | 0.9720 | 0.9875 |
| 0.0348 | 2.64 | 4500 | 0.0333 | 0.9721 | 0.9733 | 0.9727 | 0.9878 |
| 0.0346 | 2.93 | 5000 | 0.0331 | 0.9722 | 0.9735 | 0.9729 | 0.9879 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.0+cu111
- Datasets 2.4.0
- Tokenizers 0.10.3
| cemilcelik/ppo-LunarLander-v2 | cemilcelik | 2022-08-24T15:47:48Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-08-23T16:34:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 260.94 +/- 23.31
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
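A minimal loading sketch along the lines of the placeholder above (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename is assumed; use the one actually stored in the repo.
checkpoint = load_from_hub("cemilcelik/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```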
| cataluna84/pegasus-samsum | cataluna84 | 2022-08-24T15:37:26Z | 16 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-08-24T14:09:03Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4884
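For dialogue summarization, the checkpoint can be driven through the summarization `pipeline`; a minimal sketch (the dialogue is a made-up placeholder):
```python
from transformers import pipeline

# Pegasus fine-tuned on SAMSum dialogue summaries.
summarizer = pipeline("summarization", model="cataluna84/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue)[0]["summary_text"])
```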
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6902 | 0.54 | 500 | 1.4884 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.10.3
| jellicott/bert-finetuned-ner | jellicott | 2022-08-24T15:20:59Z | 107 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-08-24T14:57:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9347431025937551
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9433930804501875
- name: Accuracy
type: accuracy
value: 0.9868870312591982
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0619
- Precision: 0.9347
- Recall: 0.9522
- F1: 0.9434
- Accuracy: 0.9869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0847 | 1.0 | 1756 | 0.0696 | 0.9086 | 0.9281 | 0.9182 | 0.9817 |
| 0.0338 | 2.0 | 3512 | 0.0601 | 0.9249 | 0.9492 | 0.9369 | 0.9861 |
| 0.0173 | 3.0 | 5268 | 0.0619 | 0.9347 | 0.9522 | 0.9434 | 0.9869 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| geverend/GoldenCircle | geverend | 2022-08-24T15:04:49Z | 0 | 0 | null | ["license:cc-by-nc-4.0", "region:us"] | null | 2022-08-24T15:04:09Z |
---
license: cc-by-nc-4.0
---
Golden Circle of Floating Perfection A Halo Called Fred Steampunk
| Jeolnighty/sen | Jeolnighty | 2022-08-24T12:29:09Z | 0 | 0 | null | ["region:us"] | null | 2022-08-24T12:28:00Z |
Let the blackpink logo rest on the hand
| amrahmed/a2c-AntBulletEnv-v0 | amrahmed | 2022-08-24T12:13:31Z | 1 | 0 | stable-baselines3 | ["stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2022-08-24T12:12:18Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1405.41 +/- 291.66
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
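As with the PPO card above, a minimal loading sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The .zip filename is assumed; use the one actually stored in the repo.
checkpoint = load_from_hub("amrahmed/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```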
| NilsDamAi/nils-nl-to-rx-pt-v4 | NilsDamAi | 2022-08-24T11:48:42Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "translation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | translation | 2022-08-24T11:40:49Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: nils-nl-to-rx-pt-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nils-nl-to-rx-pt-v4
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3352
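Since this is a fine-tuned T5 model tagged for translation, it can be called through the text2text-generation `pipeline`; a minimal sketch (the input string is a placeholder, and any required task prefix is not documented in this card):
```python
from transformers import pipeline

# T5 fine-tuned for a custom natural-language-to-Rx translation task.
translator = pipeline("text2text-generation", model="NilsDamAi/nils-nl-to-rx-pt-v4")
print(translator("example input sentence")[0]["generated_text"])
```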
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8665 | 1.0 | 542 | 0.5641 |
| 0.7292 | 2.0 | 1084 | 0.3749 |
| 0.5665 | 3.0 | 1626 | 0.3352 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| vinrougeed/ddpm-butterflies-128 | vinrougeed | 2022-08-24T11:47:23Z | 0 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-08-24T11:02:31Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
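Pending the snippet above, the same `DDPMPipeline` pattern suggested by the repository tags should apply here as well (a sketch, not code from the original card):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("vinrougeed/ddpm-butterflies-128")
image = pipeline().images[0]  # unconditional sampling; returns PIL images
```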
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/vinrougeed/ddpm-butterflies-128/tensorboard?#scalars)
| Chandanab/beit-base-patch16-224-pt22k-finetuned-eurosat | Chandanab | 2022-08-24T11:24:43Z | 60 | 0 | transformers | ["transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "dataset:image_folder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2022-08-09T14:03:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: beit-base-patch16-224-pt22k-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8585858585858586
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-finetuned-eurosat
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3045
- Accuracy: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 0.5181 | 0.7071 |
| 0.6727 | 2.0 | 14 | 0.4030 | 0.8182 |
| 0.3522 | 3.0 | 21 | 0.3045 | 0.8586 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cpu
- Datasets 2.2.0
- Tokenizers 0.12.1
|
KISSz/wav2vec2-vee-demo-colab
|
KISSz
| 2022-08-24T10:48:12Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-23T02:38:23Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model_index:
name: wav2vec2-vee-demo-colab
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-vee-demo-colab
This model is a fine-tuned version of [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cpu
- Datasets 1.11.0
- Tokenizers 0.10.3
|
Neha2608/distilbert-base-uncased-finetuned-news-category
|
Neha2608
| 2022-08-24T10:30:36Z | 135 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-23T15:15:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: distilbert-base-uncased-finetuned-news-category
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-news-category
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
komo0628/1
|
komo0628
| 2022-08-24T09:58:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-24T09:51:42Z |
---
license: afl-3.0
---
kawaii
FAZER
|
hieule/distilbert-base-uncased-scratch
|
hieule
| 2022-08-24T09:38:38Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-24T08:21:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-scratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-scratch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6235
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.388 | 1.0 | 157 | 7.3651 |
| 6.9902 | 2.0 | 314 | 6.7300 |
| 6.659 | 3.0 | 471 | 6.6304 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Blind2015/distilbert-base-uncased-finetuned-cola
|
Blind2015
| 2022-08-24T09:38:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-24T09:25:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5188671521382517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7846
- Matthews Correlation: 0.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5255 | 1.0 | 535 | 0.5268 | 0.4065 |
| 0.3485 | 2.0 | 1070 | 0.4967 | 0.4848 |
| 0.2313 | 3.0 | 1605 | 0.5556 | 0.5105 |
| 0.1775 | 4.0 | 2140 | 0.7846 | 0.5189 |
| 0.1276 | 5.0 | 2675 | 0.8429 | 0.5154 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RyanQin/k2j
|
RyanQin
| 2022-08-24T09:26:54Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytojoke",
"k2j",
"Keywords to Jokes",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-24T09:19:46Z |
---
language: "en"
thumbnail: "Keywords to Jokes"
tags:
- keytojoke
- k2j
- Keywords to Jokes
license: mit
---
The idea is to build a model that takes keywords as input and generates a joke as output (see the usage sketch below).
Potential use cases include:
- joke generator
- meme generator
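A minimal usage sketch with the 🤗 Transformers pipeline; the exact keyword format the model expects (separator, prefix) is an assumption and may differ from the training data:
```python
from transformers import pipeline

# Hypothetical input format: comma-separated keywords
generator = pipeline("text2text-generation", model="RyanQin/k2j")
print(generator("dog, homework, excuse", max_length=64)[0]["generated_text"])
```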
|
chintagunta85/electramed-small-JNLPBA-ner
|
chintagunta85
| 2022-08-24T09:14:43Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:jnlpba",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-24T08:43:59Z |
---
tags:
- generated_from_trainer
datasets:
- jnlpba
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-JNLPBA-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: jnlpba
type: jnlpba
config: jnlpba
split: train
args: jnlpba
metrics:
- name: Precision
type: precision
value: 0.8224512128396863
- name: Recall
type: recall
value: 0.878188899707887
- name: F1
type: f1
value: 0.8494066679223958
- name: Accuracy
type: accuracy
value: 0.9620705451213926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-JNLPBA-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the jnlpba dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1167
- Precision: 0.8225
- Recall: 0.8782
- F1: 0.8494
- Accuracy: 0.9621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.398 | 1.0 | 2087 | 0.1941 | 0.7289 | 0.7936 | 0.7599 | 0.9441 |
| 0.0771 | 2.0 | 4174 | 0.1542 | 0.7734 | 0.8348 | 0.8029 | 0.9514 |
| 0.1321 | 3.0 | 6261 | 0.1413 | 0.7890 | 0.8492 | 0.8180 | 0.9546 |
| 0.2302 | 4.0 | 8348 | 0.1326 | 0.8006 | 0.8589 | 0.8287 | 0.9562 |
| 0.0723 | 5.0 | 10435 | 0.1290 | 0.7997 | 0.8715 | 0.8340 | 0.9574 |
| 0.171 | 6.0 | 12522 | 0.1246 | 0.8115 | 0.8722 | 0.8408 | 0.9593 |
| 0.1058 | 7.0 | 14609 | 0.1204 | 0.8148 | 0.8757 | 0.8441 | 0.9604 |
| 0.1974 | 8.0 | 16696 | 0.1178 | 0.8181 | 0.8779 | 0.8470 | 0.9614 |
| 0.0663 | 9.0 | 18783 | 0.1168 | 0.8239 | 0.8781 | 0.8501 | 0.9620 |
| 0.1022 | 10.0 | 20870 | 0.1167 | 0.8225 | 0.8782 | 0.8494 | 0.9621 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/dadjokeapibot
|
huggingtweets
| 2022-08-24T08:04:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-23T18:53:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dadjokeapibot/1661328249695/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1122922224820813824/z9zE604m_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Dad Joke Bot</div>
<div style="text-align: center; font-size: 14px;">@dadjokeapibot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Dad Joke Bot.
| Data | Dad Joke Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2te5z2ku/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dadjokeapibot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3igw9rw9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3igw9rw9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dadjokeapibot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ASCCCCCCCC/PENGMENGJIE-finetuned-bill-classification
|
ASCCCCCCCC
| 2022-08-24T07:09:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-19T02:25:37Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: PENGMENGJIE-finetuned-bill-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-bill-classification
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
- Accuracy: 1.0
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
| 0.0017 | 1.0 | 1250 | 0.0006 | 1.0 | 1.0 |
| 0.0005 | 2.0 | 2500 | 0.0003 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
shwan/readme_test
|
shwan
| 2022-08-24T06:57:15Z | 0 | 0 | null |
[
"korean",
"klue",
"summarization",
"ko",
"dataset:c4",
"license:apache-2.0",
"region:us"
] |
summarization
| 2022-08-24T06:49:32Z |
---
language: ko
tags:
- korean
- klue
- summarization
datasets:
- c4
license: apache-2.0
---
# KoMiniLM
🐣 Korean mini language model
## Overview
Current language models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this project, we release a lightweight Korean language model to address these shortcomings of existing language models.
## Quick tour
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("BM-K/KoMiniLM") # 23M model
model = AutoModel.from_pretrained("BM-K/KoMiniLM")
inputs = tokenizer("안녕 세상아!", return_tensors="pt")
outputs = model(**inputs)
```
## Update history
**Updates on 2022.06.20**
- Release KoMiniLM-bert-68M
**Updates on 2022.05.24**
- Release KoMiniLM-bert-23M
## Pre-training
`Teacher Model`: [KLUE-BERT(base)](https://github.com/KLUE-benchmark/KLUE)
### Object
Self-Attention Distribution and Self-Attention Value-Relation [Wang et al., 2020] were distilled from each layer of the teacher model into the student model. Wang et al. distilled only from the last transformer layer, whereas this project distills from every layer (a sketch of the objective follows below).
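For reference, a sketch of the MiniLM-style objective this describes (notation follows Wang et al., 2020; how the per-layer losses are weighted in this project is not stated here):
```latex
\mathcal{L} = \mathcal{L}_{\mathrm{AT}} + \mathcal{L}_{\mathrm{VR}}, \qquad
\mathcal{L}_{\mathrm{AT}} = \frac{1}{A_h |x|} \sum_{a=1}^{A_h} \sum_{t=1}^{|x|}
D_{\mathrm{KL}}\!\left(\mathbf{A}^{T}_{a,t} \,\middle\|\, \mathbf{A}^{S}_{a,t}\right), \qquad
\mathcal{L}_{\mathrm{VR}} = \frac{1}{A_h |x|} \sum_{a=1}^{A_h} \sum_{t=1}^{|x|}
D_{\mathrm{KL}}\!\left(\mathbf{VR}^{T}_{a,t} \,\middle\|\, \mathbf{VR}^{S}_{a,t}\right)
```
Here `A_h` is the number of attention heads, `|x|` the sequence length, `A` the self-attention distributions, and `VR = softmax(V V^T / sqrt(d_k))` the value-relation matrices of the teacher (T) and student (S).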
### Data sets
|Data|News comments|News article|
|:----:|:----:|:----:|
|size|10G|10G|
### Config
- **KoMiniLM-23M**
```json
{
"architectures": [
"BartForPreTraining"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"initializer_range": 0.02,
"intermediate_size": 1536,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bart",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"output_attentions": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"return_dict": false,
"torch_dtype": "float32",
"transformers_version": "4.13.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 32000
}
```
### Performance on subtasks
- The results of our fine-tuning experiments are an average of 3 runs for each task.
```
cd KoMiniLM-Finetune
bash scripts/run_all_kominilm.sh
```
|| #Param | Average | NSMC<br>(Acc) | Naver NER<br>(F1) | PAWS<br>(Acc) | KorNLI<br>(Acc) | KorSTS<br>(Spearman) | Question Pair<br>(Acc) | KorQuaD<br>(Dev)<br>(EM/F1) |
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|KoBERT(KLUE)| 110M | 86.84 | 90.20±0.07 | 87.11±0.05 | 81.36±0.21 | 81.06±0.33 | 82.47±0.14 | 95.03±0.44 | 84.43±0.18 / <br>93.05±0.04 |
|KcBERT| 108M | 78.94 | 89.60±0.10 | 84.34±0.13 | 67.02±0.42| 74.17±0.52 | 76.57±0.51 | 93.97±0.27 | 60.87±0.27 / <br>85.01±0.14 |
|KoBERT(SKT)| 92M | 79.73 | 89.28±0.42 | 87.54±0.04 | 80.93±0.91 | 78.18±0.45 | 75.98±2.81 | 94.37±0.31 | 51.94±0.60 / <br>79.69±0.66 |
|DistilKoBERT| 28M | 74.73 | 88.39±0.08 | 84.22±0.01 | 61.74±0.45 | 70.22±0.14 | 72.11±0.27 | 92.65±0.16 | 52.52±0.48 / <br>76.00±0.71 |
| | | | | | | | | |
|**KoMiniLM<sup>†</sup>**| **68M** | 85.90 | 89.84±0.02 | 85.98±0.09 | 80.78±0.30 | 79.28±0.17 | 81.00±0.07 | 94.89±0.37 | 83.27±0.08 / <br>92.08±0.06 |
|**KoMiniLM<sup>†</sup>**| **23M** | 84.79 | 89.67±0.03 | 84.79±0.09 | 78.67±0.45 | 78.10±0.07 | 78.90±0.11 | 94.81±0.12 | 82.11±0.42 / <br>91.21±0.29 |
- [NSMC](https://github.com/e9t/nsmc) (Naver Sentiment Movie Corpus)
- [Naver NER](https://github.com/naver/nlp-challenge) (NER task on Naver NLP Challenge 2018)
- [PAWS](https://github.com/google-research-datasets/paws) (Korean Paraphrase Adversaries from Word Scrambling)
- [KorNLI/KorSTS](https://github.com/kakaobrain/KorNLUDatasets) (Korean Natural Language Understanding)
- [Question Pair](https://github.com/songys/Question_pair) (Paired Question)
- [KorQuAD](https://korquad.github.io/) (The Korean Question Answering Dataset)
<img src = "https://user-images.githubusercontent.com/55969260/174229747-279122dc-9d27-4da9-a6e7-f9f1fe1651f7.png"> <br>
### User Contributed Examples
-
## Reference
- [KLUE BERT](https://github.com/KLUE-benchmark/KLUE)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [SKT KoBERT](https://github.com/SKTBrain/KoBERT)
- [DistilKoBERT](https://github.com/monologg/DistilKoBERT)
- [lassl](https://github.com/lassl/lassl)
|
chintagunta85/electramed-small-SPECIES800-ner
|
chintagunta85
| 2022-08-24T06:39:16Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:species_800",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-24T06:32:07Z |
---
tags:
- generated_from_trainer
datasets:
- species_800
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-SPECIES800-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: species_800
type: species_800
config: species_800
split: train
args: species_800
metrics:
- name: Precision
type: precision
value: 0.6221498371335505
- name: Recall
type: recall
value: 0.7470664928292047
- name: F1
type: f1
value: 0.6789099526066352
- name: Accuracy
type: accuracy
value: 0.9831434110359828
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-SPECIES800-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the species_800 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0513
- Precision: 0.6221
- Recall: 0.7471
- F1: 0.6789
- Accuracy: 0.9831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0536 | 1.0 | 359 | 0.0971 | 0.6138 | 0.5554 | 0.5832 | 0.9795 |
| 0.0309 | 2.0 | 718 | 0.0692 | 0.6175 | 0.6063 | 0.6118 | 0.9808 |
| 0.0563 | 3.0 | 1077 | 0.0582 | 0.6424 | 0.6910 | 0.6658 | 0.9819 |
| 0.0442 | 4.0 | 1436 | 0.0553 | 0.5900 | 0.7523 | 0.6613 | 0.9814 |
| 0.0069 | 5.0 | 1795 | 0.0511 | 0.6291 | 0.7497 | 0.6841 | 0.9827 |
| 0.0141 | 6.0 | 2154 | 0.0505 | 0.6579 | 0.7471 | 0.6996 | 0.9837 |
| 0.0052 | 7.0 | 2513 | 0.0513 | 0.5965 | 0.7458 | 0.6628 | 0.9826 |
| 0.0573 | 8.0 | 2872 | 0.0509 | 0.6140 | 0.7445 | 0.6730 | 0.9828 |
| 0.0203 | 9.0 | 3231 | 0.0516 | 0.6118 | 0.7458 | 0.6722 | 0.9830 |
| 0.0101 | 10.0 | 3590 | 0.0513 | 0.6221 | 0.7471 | 0.6789 | 0.9831 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
chintagunta85/electramed-small-BC4CHEMD-ner
|
chintagunta85
| 2022-08-24T05:44:59Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:bc4chemd",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-24T05:07:03Z |
---
tags:
- generated_from_trainer
datasets:
- bc4chemd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-BC4CHEMD-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: bc4chemd
type: bc4chemd
config: bc4chemd
split: train
args: bc4chemd
metrics:
- name: Precision
type: precision
value: 0.7715624436835465
- name: Recall
type: recall
value: 0.6760888102832959
- name: F1
type: f1
value: 0.7206773498518718
- name: Accuracy
type: accuracy
value: 0.9770623458780496
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-BC4CHEMD-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the bc4chemd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0655
- Precision: 0.7716
- Recall: 0.6761
- F1: 0.7207
- Accuracy: 0.9771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0882 | 1.0 | 1918 | 0.1058 | 0.6615 | 0.3942 | 0.4940 | 0.9635 |
| 0.0555 | 2.0 | 3836 | 0.0820 | 0.7085 | 0.5133 | 0.5954 | 0.9689 |
| 0.0631 | 3.0 | 5754 | 0.0769 | 0.6892 | 0.5743 | 0.6266 | 0.9699 |
| 0.0907 | 4.0 | 7672 | 0.0682 | 0.7623 | 0.5923 | 0.6666 | 0.9740 |
| 0.0313 | 5.0 | 9590 | 0.0675 | 0.7643 | 0.6223 | 0.6860 | 0.9749 |
| 0.0306 | 6.0 | 11508 | 0.0662 | 0.7654 | 0.6398 | 0.6970 | 0.9754 |
| 0.0292 | 7.0 | 13426 | 0.0656 | 0.7694 | 0.6552 | 0.7077 | 0.9763 |
| 0.1025 | 8.0 | 15344 | 0.0658 | 0.7742 | 0.6687 | 0.7176 | 0.9769 |
| 0.0394 | 9.0 | 17262 | 0.0662 | 0.7741 | 0.6731 | 0.7201 | 0.9770 |
| 0.0378 | 10.0 | 19180 | 0.0655 | 0.7716 | 0.6761 | 0.7207 | 0.9771 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
hhffxx/distilbert-base-uncased-finetuned-emotion
|
hhffxx
| 2022-08-24T02:29:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-12T06:49:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9385
- name: F1
type: f1
value: 0.9382234767195092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9385
- F1: 0.9382
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.5485 | 1.0 | 16000 | 0.3088 | 0.933 | 0.9322 |
| 0.2384 | 2.0 | 32000 | 0.2716 | 0.9385 | 0.9382 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
farleyknight/vit-base-roman-numeral
|
farleyknight
| 2022-08-24T02:23:03Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-24T02:13:16Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-roman-numeral
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: farleyknight/roman_numerals
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8308823529411765
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-roman-numeral
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the farleyknight/roman_numerals dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6891
- Accuracy: 0.8309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9053 | 1.0 | 289 | 1.3241 | 0.7108 |
| 1.3293 | 2.0 | 578 | 0.9333 | 0.7892 |
| 1.1251 | 3.0 | 867 | 0.7989 | 0.7843 |
| 0.9837 | 4.0 | 1156 | 0.6956 | 0.8186 |
| 0.999 | 5.0 | 1445 | 0.6891 | 0.8309 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.4.0
- Tokenizers 0.12.1
|
zzj0402/distilbert-base-uncased-finetuned-imdb
|
zzj0402
| 2022-08-24T02:07:57Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-24T02:00:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nguyenkhoa2407/gpt2-NER-favsbot
|
nguyenkhoa2407
| 2022-08-24T01:43:23Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:favsbot",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-24T01:24:41Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- favsbot
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gpt2-NER-favsbot
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: favsbot
type: favsbot
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.3782051282051282
- name: Recall
type: recall
value: 0.3277777777777778
- name: F1
type: f1
value: 0.3511904761904762
- name: Accuracy
type: accuracy
value: 0.5597189695550351
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-NER-favsbot
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5146
- Precision: 0.3782
- Recall: 0.3278
- F1: 0.3512
- Accuracy: 0.5597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 4.0808 | 0.0630 | 0.0444 | 0.0521 | 0.0773 |
| No log | 2.0 | 8 | 3.5205 | 0.0946 | 0.0778 | 0.0854 | 0.1077 |
| No log | 3.0 | 12 | 3.0413 | 0.0909 | 0.0722 | 0.0805 | 0.2084 |
| No log | 4.0 | 16 | 2.6817 | 0.0615 | 0.0444 | 0.0516 | 0.2740 |
| No log | 5.0 | 20 | 2.4227 | 0.1140 | 0.0722 | 0.0884 | 0.3560 |
| No log | 6.0 | 24 | 2.2422 | 0.1382 | 0.0944 | 0.1122 | 0.3770 |
| No log | 7.0 | 28 | 2.0941 | 0.1654 | 0.1222 | 0.1406 | 0.3864 |
| No log | 8.0 | 32 | 1.9726 | 0.2344 | 0.1667 | 0.1948 | 0.4309 |
| No log | 9.0 | 36 | 1.8916 | 0.2925 | 0.1722 | 0.2168 | 0.4543 |
| No log | 10.0 | 40 | 1.8321 | 0.31 | 0.1722 | 0.2214 | 0.4660 |
| No log | 11.0 | 44 | 1.7697 | 0.2957 | 0.1889 | 0.2305 | 0.4707 |
| No log | 12.0 | 48 | 1.7087 | 0.3228 | 0.2278 | 0.2671 | 0.4965 |
| No log | 13.0 | 52 | 1.6551 | 0.3485 | 0.2556 | 0.2949 | 0.5152 |
| No log | 14.0 | 56 | 1.6136 | 0.3219 | 0.2611 | 0.2883 | 0.5176 |
| No log | 15.0 | 60 | 1.5819 | 0.3510 | 0.2944 | 0.3202 | 0.5363 |
| No log | 16.0 | 64 | 1.5575 | 0.3506 | 0.3 | 0.3234 | 0.5410 |
| No log | 17.0 | 68 | 1.5394 | 0.3529 | 0.3 | 0.3243 | 0.5433 |
| No log | 18.0 | 72 | 1.5265 | 0.3791 | 0.3222 | 0.3483 | 0.5574 |
| No log | 19.0 | 76 | 1.5180 | 0.3766 | 0.3222 | 0.3473 | 0.5574 |
| No log | 20.0 | 80 | 1.5146 | 0.3782 | 0.3278 | 0.3512 | 0.5597 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
JAlexis/bertFast_02
|
JAlexis
| 2022-08-24T01:19:46Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-23T19:57:46Z |
---
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/bertFast_02"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'How can I protect myself against covid-19?',
'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
|
jhonparra18/distilbert-base-uncased-ner_cv
|
jhonparra18
| 2022-08-23T22:28:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T22:11:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-ner_cv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-ner_cv
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8548
- Precision: 0.3327
- Recall: 0.2358
- F1: 0.2760
- Accuracy: 0.7815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 5.0 | 30 | 1.0790 | 0.0 | 0.0 | 0.0 | 0.7537 |
| No log | 10.0 | 60 | 0.9589 | 0.3208 | 0.1207 | 0.1754 | 0.7677 |
| No log | 15.0 | 90 | 0.8975 | 0.3363 | 0.1591 | 0.2160 | 0.7773 |
| No log | 20.0 | 120 | 0.8675 | 0.3354 | 0.2259 | 0.2699 | 0.7786 |
| No log | 25.0 | 150 | 0.8568 | 0.3333 | 0.2443 | 0.2820 | 0.7811 |
| No log | 30.0 | 180 | 0.8548 | 0.3327 | 0.2358 | 0.2760 | 0.7815 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.8.1+cu111
- Datasets 1.6.2
- Tokenizers 0.12.1
|
nbroad/rob-base-gc1
|
nbroad
| 2022-08-23T21:13:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"dataset:quoref",
"dataset:adversarial_qa",
"dataset:duorc",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-23T15:18:41Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
- quoref
- adversarial_qa
- duorc
model-index:
- name: rob-base-gc1
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 42.9
verified: true
- name: F1
type: f1
value: 53.8954
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 79.5382
verified: true
- name: F1
type: f1
value: 82.7221
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: quoref
type: quoref
config: default
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 78.403
verified: true
- name: F1
type: f1
value: 82.1408
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob-base-gc1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+cpu
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nbroad/deb-base-gc2
|
nbroad
| 2022-08-23T21:03:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"optimum_graphcore",
"deberta",
"question-answering",
"dataset:squad_v2",
"dataset:quoref",
"dataset:adversarial_qa",
"dataset:duorc",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-23T18:47:02Z |
---
datasets:
- squad_v2
- quoref
- adversarial_qa
- duorc
---
|
andres-hsn/a2c-AntBulletEnv-v0
|
andres-hsn
| 2022-08-23T20:38:32Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-23T20:37:36Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1378.24 +/- 479.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual `<algo>-<env>.zip` naming convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into an A2C agent
checkpoint = load_from_hub(repo_id="andres-hsn/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
niclas/models_sv_eric_1
|
niclas
| 2022-08-23T19:42:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-23T11:54:16Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: models_sv_eric_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models_sv_eric_1
This model is a fine-tuned version of [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1340
- Wer: 0.6241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 27.2483 | 5.81 | 250 | 12.8968 | 1.0 |
| 5.3813 | 11.63 | 500 | 3.7635 | 1.0 |
| 3.1776 | 17.44 | 750 | 3.1586 | 1.0 |
| 3.0849 | 23.26 | 1000 | 3.1336 | 1.0 |
| 3.0351 | 29.07 | 1250 | 3.0069 | 1.0 |
| 2.5591 | 34.88 | 1500 | 1.8101 | 0.9735 |
| 1.4236 | 40.7 | 1750 | 1.3666 | 0.8120 |
| 0.9233 | 46.51 | 2000 | 1.3338 | 0.7470 |
| 0.6594 | 52.33 | 2250 | 1.4020 | 0.7060 |
| 0.5056 | 58.14 | 2500 | 1.3793 | 0.7036 |
| 0.4135 | 63.95 | 2750 | 1.3789 | 0.6988 |
| 0.3521 | 69.77 | 3000 | 1.4288 | 0.6795 |
| 0.2728 | 75.58 | 3250 | 1.4819 | 0.6554 |
| 0.2419 | 81.4 | 3500 | 1.5370 | 0.6602 |
| 0.2288 | 87.21 | 3750 | 1.4477 | 0.6265 |
| 0.2009 | 93.02 | 4000 | 1.5387 | 0.6602 |
| 0.1773 | 98.84 | 4250 | 1.6789 | 0.6723 |
| 0.1701 | 104.65 | 4500 | 1.6322 | 0.6361 |
| 0.1562 | 110.47 | 4750 | 1.5988 | 0.6554 |
| 0.1433 | 116.28 | 5000 | 1.7502 | 0.6458 |
| 0.1373 | 122.09 | 5250 | 1.7735 | 0.6217 |
| 0.1186 | 127.91 | 5500 | 1.7193 | 0.6193 |
| 0.1127 | 133.72 | 5750 | 1.8742 | 0.6410 |
| 0.113 | 139.53 | 6000 | 1.8339 | 0.6337 |
| 0.1106 | 145.35 | 6250 | 1.7486 | 0.6289 |
| 0.0955 | 151.16 | 6500 | 1.7455 | 0.6361 |
| 0.0934 | 156.98 | 6750 | 1.8922 | 0.6361 |
| 0.0873 | 162.79 | 7000 | 2.0495 | 0.6530 |
| 0.0863 | 168.6 | 7250 | 1.8438 | 0.6361 |
| 0.0901 | 174.42 | 7500 | 2.0441 | 0.6289 |
| 0.0749 | 180.23 | 7750 | 2.0112 | 0.6265 |
| 0.0887 | 186.05 | 8000 | 2.0684 | 0.6554 |
| 0.074 | 191.86 | 8250 | 2.0821 | 0.6265 |
| 0.0714 | 197.67 | 8500 | 2.0790 | 0.6313 |
| 0.0638 | 203.49 | 8750 | 2.0158 | 0.6072 |
| 0.0633 | 209.3 | 9000 | 2.0423 | 0.6386 |
| 0.0621 | 215.12 | 9250 | 2.0013 | 0.6241 |
| 0.0616 | 220.93 | 9500 | 1.9567 | 0.6386 |
| 0.0627 | 226.74 | 9750 | 2.0302 | 0.6361 |
| 0.0604 | 232.56 | 10000 | 2.0424 | 0.6096 |
| 0.0551 | 238.37 | 10250 | 2.0238 | 0.6096 |
| 0.0559 | 244.19 | 10500 | 2.0207 | 0.6361 |
| 0.0587 | 250.0 | 10750 | 2.0818 | 0.6361 |
| 0.0508 | 255.81 | 11000 | 2.1106 | 0.6289 |
| 0.0494 | 261.63 | 11250 | 2.1194 | 0.6434 |
| 0.0576 | 267.44 | 11500 | 2.0752 | 0.6410 |
| 0.0521 | 273.26 | 11750 | 2.1455 | 0.6361 |
| 0.0479 | 279.07 | 12000 | 2.1583 | 0.6337 |
| 0.0501 | 284.88 | 12250 | 2.1400 | 0.6386 |
| 0.0447 | 290.7 | 12500 | 2.1440 | 0.6265 |
| 0.0455 | 296.51 | 12750 | 2.1340 | 0.6241 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0
- Datasets 1.13.3
- Tokenizers 0.10.3
|
samayl24/local-test-cifar-10
|
samayl24
| 2022-08-23T19:26:30Z | 0 | 0 | null |
[
"pytorch",
"vision",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2022-08-02T22:30:19Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
|
shalpin87/diffusion_conditional
|
shalpin87
| 2022-08-23T17:43:04Z | 69 | 0 |
diffusers
|
[
"diffusers",
"en",
"dataset:CelebA",
"license:apache-2.0",
"diffusers:DDPMConditionalPipeline",
"region:us"
] | null | 2022-08-15T23:23:16Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: CelebA
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# diffusion_conditional
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `CelebA` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 1
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(0.95, 0.999), weight_decay=1e-06 and epsilon=1e-08
- lr_scheduler: cosine
- lr_warmup_steps: 500
- ema_inv_gamma: 1.0
- ema_power: 0.75
- ema_max_decay: 0.9999
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/shalpin87/diffusion_conditional/tensorboard?#scalars)
|
nbroad/rob-base-superqa2
|
nbroad
| 2022-08-23T17:05:47Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"optimum_habana",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"dataset:quoref",
"dataset:adversarial_qa",
"dataset:duorc",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-17T04:02:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
- quoref
- adversarial_qa
- duorc
model-index:
- name: rob-base-superqa2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 79.2365
verified: true
- name: F1
type: f1
value: 82.3326
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: test
metrics:
- name: Exact Match
type: exact_match
value: 12.4
verified: true
- name: F1
type: f1
value: 12.4
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 42.3667
verified: true
- name: F1
type: f1
value: 53.3255
verified: true
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 86.1925
verified: true
- name: F1
type: f1
value: 92.4306
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rob-base-superqa2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
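A minimal extractive-QA sketch with the 🤗 Transformers pipeline (the question and context below are made up for illustration):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an extractive question-answering pipeline
qa = pipeline("question-answering", model="nbroad/rob-base-superqa2")

result = qa(
    question="Which library loads the model?",
    context="The checkpoint is loaded with the Hugging Face transformers library.",
)
print(result["answer"], result["score"])
```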
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0a0+gita4c10ee
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nguyenkhoa2407/camembert-base-NER-favsbot
|
nguyenkhoa2407
| 2022-08-23T16:44:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"generated_from_trainer",
"dataset:favsbot",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T16:37:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- favsbot
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: camembert-base-NER-favsbot
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: favsbot
type: favsbot
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.6
- name: Recall
type: recall
value: 0.012145748987854251
- name: F1
type: f1
value: 0.023809523809523808
- name: Accuracy
type: accuracy
value: 0.42078364565587734
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert-base-NER-favsbot
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7433
- Precision: 0.6
- Recall: 0.0121
- F1: 0.0238
- Accuracy: 0.4208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 2.2915 | 0.1364 | 0.1215 | 0.1285 | 0.3475 |
| No log | 2.0 | 8 | 2.2230 | 0.2909 | 0.0648 | 0.1060 | 0.4395 |
| No log | 3.0 | 12 | 2.1573 | 0.4545 | 0.0202 | 0.0388 | 0.4225 |
| No log | 4.0 | 16 | 2.0961 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 5.0 | 20 | 2.0426 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 6.0 | 24 | 1.9965 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 7.0 | 28 | 1.9575 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 8.0 | 32 | 1.9233 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 9.0 | 36 | 1.8933 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 10.0 | 40 | 1.8674 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 11.0 | 44 | 1.8441 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 12.0 | 48 | 1.8240 | 0.0 | 0.0 | 0.0 | 0.4123 |
| No log | 13.0 | 52 | 1.8060 | 1.0 | 0.0040 | 0.0081 | 0.4140 |
| No log | 14.0 | 56 | 1.7899 | 1.0 | 0.0040 | 0.0081 | 0.4140 |
| No log | 15.0 | 60 | 1.7762 | 1.0 | 0.0040 | 0.0081 | 0.4140 |
| No log | 16.0 | 64 | 1.7647 | 0.5 | 0.0040 | 0.0080 | 0.4157 |
| No log | 17.0 | 68 | 1.7556 | 0.5 | 0.0040 | 0.0080 | 0.4157 |
| No log | 18.0 | 72 | 1.7490 | 0.6667 | 0.0081 | 0.016 | 0.4174 |
| No log | 19.0 | 76 | 1.7449 | 0.75 | 0.0121 | 0.0239 | 0.4191 |
| No log | 20.0 | 80 | 1.7433 | 0.6 | 0.0121 | 0.0238 | 0.4208 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
shogumbo/testing2-multilabel-classifier
|
shogumbo
| 2022-08-23T16:40:47Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"multi_label_classification",
"text-classification",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2022-08-23T16:31:08Z |
---
pipeline_tag: "text-classification"
tags:
- "text-classification"
---
|
JAlexis/bert003
|
JAlexis
| 2022-08-23T16:15:53Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-02T03:00:42Z |
---
language: en
#epoch 6
#batch size 16
#lr 5e-5
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/bert003"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'How can I protect myself against covid-19?',
'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
|
gossminn/detect-femicide-news-xlmr-nl-mono-freeze2
|
gossminn
| 2022-08-23T14:40:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-23T14:27:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: detect-femicide-news-xlmr-nl-mono-freeze2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detect-femicide-news-xlmr-nl-mono-freeze2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6487
- Accuracy: 0.6429
- Precision Neg: 0.6429
- Precision Pos: 0.0
- Recall Neg: 1.0
- Recall Pos: 0.0
- F1 Score Neg: 0.7826
- F1 Score Pos: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
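The card includes no usage example; a minimal sketch with the text-classification pipeline is shown below (the returned label names come from the fine-tuned config and should map to the neg/pos classes reported above).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gossminn/detect-femicide-news-xlmr-nl-mono-freeze2",
)
# Dutch news headline; the label/score pair follows the neg/pos classes in the metrics above.
print(classifier("Man aangehouden na dodelijk geweld tegen vrouw in Rotterdam."))
```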
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Neg | Precision Pos | Recall Neg | Recall Pos | F1 Score Neg | F1 Score Pos |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:-------------:|:----------:|:----------:|:------------:|:------------:|
| 0.7312 | 1.0 | 23 | 0.7413 | 0.3571 | 0.0 | 0.3571 | 0.0 | 1.0 | 0.0 | 0.5263 |
| 0.7151 | 2.0 | 46 | 0.7177 | 0.3571 | 0.0 | 0.3571 | 0.0 | 1.0 | 0.0 | 0.5263 |
| 0.7049 | 3.0 | 69 | 0.6988 | 0.3571 | 0.0 | 0.3571 | 0.0 | 1.0 | 0.0 | 0.5263 |
| 0.6934 | 4.0 | 92 | 0.6945 | 0.3571 | 0.0 | 0.3571 | 0.0 | 1.0 | 0.0 | 0.5263 |
| 0.6886 | 5.0 | 115 | 0.6903 | 0.6071 | 0.8182 | 0.4706 | 0.5 | 0.8 | 0.6207 | 0.5926 |
| 0.6911 | 6.0 | 138 | 0.6846 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6856 | 7.0 | 161 | 0.6786 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6888 | 8.0 | 184 | 0.6783 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6862 | 9.0 | 207 | 0.6819 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6807 | 10.0 | 230 | 0.6758 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6839 | 11.0 | 253 | 0.6721 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6878 | 12.0 | 276 | 0.6708 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6799 | 13.0 | 299 | 0.6692 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6813 | 14.0 | 322 | 0.6673 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6792 | 15.0 | 345 | 0.6676 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6774 | 16.0 | 368 | 0.6683 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6807 | 17.0 | 391 | 0.6679 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6834 | 18.0 | 414 | 0.6693 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6784 | 19.0 | 437 | 0.6679 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.676 | 20.0 | 460 | 0.6698 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6791 | 21.0 | 483 | 0.6661 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6775 | 22.0 | 506 | 0.6633 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6688 | 23.0 | 529 | 0.6589 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6748 | 24.0 | 552 | 0.6580 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6771 | 25.0 | 575 | 0.6619 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6761 | 26.0 | 598 | 0.6639 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6773 | 27.0 | 621 | 0.6651 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6737 | 28.0 | 644 | 0.6656 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6721 | 29.0 | 667 | 0.6650 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6683 | 30.0 | 690 | 0.6612 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6663 | 31.0 | 713 | 0.6592 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6724 | 32.0 | 736 | 0.6576 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6739 | 33.0 | 759 | 0.6601 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6691 | 34.0 | 782 | 0.6602 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6652 | 35.0 | 805 | 0.6588 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6717 | 36.0 | 828 | 0.6596 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6637 | 37.0 | 851 | 0.6587 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6704 | 38.0 | 874 | 0.6579 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6608 | 39.0 | 897 | 0.6599 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6615 | 40.0 | 920 | 0.6580 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6662 | 41.0 | 943 | 0.6592 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6622 | 42.0 | 966 | 0.6616 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.664 | 43.0 | 989 | 0.6610 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.6695 | 44.0 | 1012 | 0.6570 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6677 | 45.0 | 1035 | 0.6557 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6705 | 46.0 | 1058 | 0.6546 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6591 | 47.0 | 1081 | 0.6547 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6675 | 48.0 | 1104 | 0.6532 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6622 | 49.0 | 1127 | 0.6544 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6571 | 50.0 | 1150 | 0.6552 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6678 | 51.0 | 1173 | 0.6555 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6596 | 52.0 | 1196 | 0.6544 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6583 | 53.0 | 1219 | 0.6517 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6641 | 54.0 | 1242 | 0.6508 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.671 | 55.0 | 1265 | 0.6502 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6645 | 56.0 | 1288 | 0.6513 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6604 | 57.0 | 1311 | 0.6510 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6644 | 58.0 | 1334 | 0.6509 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6617 | 59.0 | 1357 | 0.6528 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6608 | 60.0 | 1380 | 0.6536 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6533 | 61.0 | 1403 | 0.6533 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6596 | 62.0 | 1426 | 0.6518 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6607 | 63.0 | 1449 | 0.6511 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.658 | 64.0 | 1472 | 0.6509 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6546 | 65.0 | 1495 | 0.6514 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6613 | 66.0 | 1518 | 0.6516 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.662 | 67.0 | 1541 | 0.6506 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.661 | 68.0 | 1564 | 0.6503 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6571 | 69.0 | 1587 | 0.6497 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6656 | 70.0 | 1610 | 0.6500 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6637 | 71.0 | 1633 | 0.6508 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6519 | 72.0 | 1656 | 0.6518 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6593 | 73.0 | 1679 | 0.6516 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.6539 | 74.0 | 1702 | 0.6514 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.6568 | 75.0 | 1725 | 0.6506 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6581 | 76.0 | 1748 | 0.6504 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6557 | 77.0 | 1771 | 0.6499 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6542 | 78.0 | 1794 | 0.6500 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6557 | 79.0 | 1817 | 0.6498 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6637 | 80.0 | 1840 | 0.6493 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6603 | 81.0 | 1863 | 0.6490 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6568 | 82.0 | 1886 | 0.6485 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6561 | 83.0 | 1909 | 0.6490 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6665 | 84.0 | 1932 | 0.6499 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.655 | 85.0 | 1955 | 0.6492 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6509 | 86.0 | 1978 | 0.6493 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6549 | 87.0 | 2001 | 0.6493 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.655 | 88.0 | 2024 | 0.6489 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6576 | 89.0 | 2047 | 0.6493 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6612 | 90.0 | 2070 | 0.6492 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.6641 | 91.0 | 2093 | 0.6492 | 0.6071 | 0.6296 | 0.0 | 0.9444 | 0.0 | 0.7556 | 0.0 |
| 0.654 | 92.0 | 2116 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6556 | 93.0 | 2139 | 0.6488 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6566 | 94.0 | 2162 | 0.6486 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6565 | 95.0 | 2185 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6516 | 96.0 | 2208 | 0.6488 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6509 | 97.0 | 2231 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6588 | 98.0 | 2254 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6532 | 99.0 | 2277 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
| 0.6548 | 100.0 | 2300 | 0.6487 | 0.6429 | 0.6429 | 0.0 | 1.0 | 0.0 | 0.7826 | 0.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HYM/Cbert_base_ws-finetuned-ner
|
HYM
| 2022-08-23T13:21:40Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-21T01:12:06Z |
---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Cbert_base_ws-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Cbert_base_ws-finetuned-ner
This model is a fine-tuned version of [ckiplab/bert-base-chinese-ws](https://huggingface.co/ckiplab/bert-base-chinese-ws) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0582
- Precision: 0.9602
- Recall: 0.9633
- F1: 0.9617
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
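No usage example is provided; the sketch below loads the model with `AutoModelForTokenClassification` and prints one predicted tag per (sub)token. The tag set is assumed to follow the word-segmentation scheme of the ckiplab base model, which this card does not confirm.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("HYM/Cbert_base_ws-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("HYM/Cbert_base_ws-finetuned-ner")

inputs = tokenizer("今天天氣很好", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One predicted label per token; id2label comes from the fine-tuned config.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id])
```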
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0482 | 0.64 | 1000 | 0.0509 | 0.9601 | 0.9582 | 0.9592 | 0.9817 |
| 0.0364 | 1.28 | 2000 | 0.0521 | 0.9590 | 0.9615 | 0.9602 | 0.9820 |
| 0.0341 | 1.92 | 3000 | 0.0548 | 0.9546 | 0.9625 | 0.9585 | 0.9812 |
| 0.0264 | 2.56 | 4000 | 0.0550 | 0.9593 | 0.9623 | 0.9608 | 0.9822 |
| 0.0227 | 3.19 | 5000 | 0.0582 | 0.9602 | 0.9633 | 0.9617 | 0.9827 |
| 0.021 | 3.83 | 6000 | 0.0595 | 0.9581 | 0.9624 | 0.9603 | 0.9820 |
| 0.0162 | 4.47 | 7000 | 0.0686 | 0.9574 | 0.9626 | 0.9600 | 0.9819 |
| 0.0159 | 5.11 | 8000 | 0.0719 | 0.9596 | 0.9614 | 0.9605 | 0.9822 |
| 0.0144 | 5.75 | 9000 | 0.0732 | 0.9590 | 0.9620 | 0.9605 | 0.9822 |
| 0.0109 | 6.39 | 10000 | 0.0782 | 0.9599 | 0.9626 | 0.9612 | 0.9824 |
| 0.0122 | 7.03 | 11000 | 0.0803 | 0.9605 | 0.9620 | 0.9612 | 0.9825 |
| 0.0097 | 7.67 | 12000 | 0.0860 | 0.9591 | 0.9620 | 0.9605 | 0.9822 |
| 0.0087 | 8.31 | 13000 | 0.0877 | 0.9591 | 0.9616 | 0.9603 | 0.9821 |
| 0.0087 | 8.95 | 14000 | 0.0902 | 0.9585 | 0.9630 | 0.9607 | 0.9823 |
| 0.0078 | 9.58 | 15000 | 0.0929 | 0.9589 | 0.9621 | 0.9605 | 0.9821 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.0+cu111
- Datasets 2.4.0
- Tokenizers 0.10.3
|
T-Systems-onsite/cross-de-nl-roberta-sentence-transformer
|
T-Systems-onsite
| 2022-08-23T12:38:10Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"nl",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- nl
- de
license: mit
tags:
- sentence_embedding
---
|
T-Systems-onsite/cross-en-nl-it-roberta-sentence-transformer
|
T-Systems-onsite
| 2022-08-23T12:37:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"nl",
"it",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- nl
- it
license: mit
tags:
- sentence_embedding
---
|
T-Systems-onsite/cross-en-nl-fr-roberta-sentence-transformer
|
T-Systems-onsite
| 2022-08-23T12:37:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"nl",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- nl
- fr
license: mit
tags:
- sentence_embedding
---
|
model-attribution-challenge/gpt2-xl
|
model-attribution-challenge
| 2022-08-23T11:53:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-26T13:36:42Z |
---
language: en
license: mit
---
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. It is pretrained on English text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Specifically, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a mask mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
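As a concrete illustration of this objective in the `transformers` API (a sketch, not part of the original card): passing the input ids as `labels` makes the model shift them internally, so the loss at each position depends only on earlier tokens.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')

enc = tokenizer("Hello, I'm a language model,", return_tensors='pt')
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # labels are shifted one position inside the model

print(out.loss)          # average negative log-likelihood of the next-token predictions
print(out.logits.shape)  # (batch, sequence_length, vocab_size) -- a 50,257-way distribution per position
```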
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
|
g1gaman/One_dream_one_soul
|
g1gaman
| 2022-08-23T11:36:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-23T11:35:58Z |
One dream, one soul
One prize, one goal
One golden glance of what should be
It's a kind of magic
|
jonas/roberta-base-finetuned-sdg
|
jonas
| 2022-08-23T09:49:42Z | 160 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-23T09:11:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-sdg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sdg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4993
- Acc: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
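The card includes no usage example; a minimal sketch for obtaining class probabilities from the fine-tuned head is shown below (the class names come from the model's own `id2label` mapping, which is not documented here).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jonas/roberta-base-finetuned-sdg")
model = AutoModelForSequenceClassification.from_pretrained("jonas/roberta-base-finetuned-sdg")

text = "Ensure access to affordable, reliable, sustainable and modern energy for all."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

top = probs.argmax().item()
print(model.config.id2label[top], round(probs[top].item(), 3))
```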
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4679 | 1.0 | 254 | 0.3660 | 0.8963 |
| 0.3578 | 2.0 | 508 | 0.3689 | 0.9019 |
| 0.2739 | 3.0 | 762 | 0.3284 | 0.9035 |
| 0.1841 | 4.0 | 1016 | 0.3763 | 0.9019 |
| 0.1127 | 5.0 | 1270 | 0.4174 | 0.9024 |
| 0.0822 | 6.0 | 1524 | 0.4523 | 0.9013 |
| 0.0329 | 7.0 | 1778 | 0.4829 | 0.9030 |
| 0.0157 | 8.0 | 2032 | 0.4993 | 0.9024 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0a0+8a1a93a
- Datasets 2.4.0
- Tokenizers 0.12.1
|
kws/Reinforce-2000steps
|
kws
| 2022-08-23T09:21:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-23T09:19:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2000steps
results:
- metrics:
- type: mean_reward
value: 213.70 +/- 9.52
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Shamus/mt5-base-finetuned-ar-to-en
|
Shamus
| 2022-08-23T08:56:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-23T07:28:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-base-finetuned-ar-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-ar-to-en
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Bleu: 0.0111
- Gen Len: 6.732
## Model description
More information needed
## Intended uses & limitations
More information needed
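For completeness, a minimal loading sketch with the `text2text-generation` pipeline is shown below; note that, given the NaN validation loss and near-zero BLEU reported above, the generated translations are unlikely to be usable.

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Shamus/mt5-base-finetuned-ar-to-en")
print(translator("كيف حالك؟", max_length=64))
```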
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 163.1788 | 1.0 | 816 | nan | 0.0111 | 6.732 |
| 1.1396 | 2.0 | 1632 | nan | 0.0111 | 6.732 |
| 0.0381 | 3.0 | 2448 | nan | 0.0111 | 6.732 |
| 0.0 | 4.0 | 3264 | nan | 0.0111 | 6.732 |
| 155.5697 | 5.0 | 4080 | nan | 0.0111 | 6.732 |
| 74.9948 | 6.0 | 4896 | nan | 0.0111 | 6.732 |
| 0.116 | 6.13 | 5000 | nan | 0.0111 | 6.732 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
guoguo/distilbert-base-uncased-finetuned-squad-d5716d28
|
guoguo
| 2022-08-23T08:52:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-23T08:50:41Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
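This section is empty in the original card. Purely for illustration, a sketch of a typical second-step (task-specific) distillation objective for extractive QA is given below; the temperature, loss weighting, and exact formulation actually used for this model are not documented, so treat the values as assumptions.

```python
import torch.nn.functional as F

def qa_distillation_loss(student_start, student_end, teacher_start, teacher_end,
                         start_positions, end_positions, temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with a soft KL term against the teacher's span logits."""
    # Hard loss: the usual SQuAD objective on the gold start/end positions.
    ce = (F.cross_entropy(student_start, start_positions)
          + F.cross_entropy(student_end, end_positions)) / 2
    # Soft loss: temperature-scaled KL divergence towards the teacher's start/end distributions.
    kl = (F.kl_div(F.log_softmax(student_start / temperature, dim=-1),
                   F.softmax(teacher_start / temperature, dim=-1), reduction="batchmean")
          + F.kl_div(F.log_softmax(student_end / temperature, dim=-1),
                     F.softmax(teacher_end / temperature, dim=-1), reduction="batchmean")) / 2
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```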
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
T-Systems-onsite/cross-en-pl-it-roberta-sentence-transformer
|
T-Systems-onsite
| 2022-08-23T07:18:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence_embedding",
"en",
"pl",
"it",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language:
- en
- pl
- it
license: mit
tags:
- sentence_embedding
---
|
ish97/bert-finetuned-ner-wnut17
|
ish97
| 2022-08-23T07:15:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T06:59:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-wnut17
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: train
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5301047120418848
- name: Recall
type: recall
value: 0.48444976076555024
- name: F1
type: f1
value: 0.50625
- name: Accuracy
type: accuracy
value: 0.9252876639015253
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-wnut17
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3444
- Precision: 0.5301
- Recall: 0.4844
- F1: 0.5062
- Accuracy: 0.9253
## Model description
More information needed
## Intended uses & limitations
More information needed
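A minimal usage sketch with the token-classification pipeline (the entity labels follow the wnut_17 tag set stored in the model config):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ish97/bert-finetuned-ner-wnut17",
    aggregation_strategy="simple",
)
print(ner("Empire State Building tonight with Taylor Swift"))
```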
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 425 | 0.3361 | 0.5602 | 0.4007 | 0.4672 | 0.9172 |
| 0.2009 | 2.0 | 850 | 0.3617 | 0.5331 | 0.4043 | 0.4599 | 0.9201 |
| 0.0947 | 3.0 | 1275 | 0.3444 | 0.5301 | 0.4844 | 0.5062 | 0.9253 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
abdulmatinomotoso/English_Grammar_Checker
|
abdulmatinomotoso
| 2022-08-23T07:13:02Z | 1,561 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-23T03:43:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: English_Grammar_Checker
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5324115893962171
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# English_Grammar_Checker
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1117
- Matthews Correlation: 0.5324
## Model description
More information needed
## Intended uses & limitations
More information needed
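A minimal usage sketch is given below. The card does not state the label mapping, so whether the returned `LABEL_0`/`LABEL_1` correspond to unacceptable/acceptable (the usual CoLA convention) is an assumption to verify against the model config.

```python
from transformers import pipeline

checker = pipeline("text-classification", model="abdulmatinomotoso/English_Grammar_Checker")
print(checker("The cat sat on the mat."))          # expected: the "acceptable" class
print(checker("The cat sat in on the mat the."))   # expected: the "unacceptable" class
```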
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.527 | 1.0 | 1069 | 0.6183 | 0.3947 |
| 0.387 | 2.0 | 2138 | 0.5165 | 0.5156 |
| 0.2772 | 3.0 | 3207 | 0.6716 | 0.5211 |
| 0.176 | 4.0 | 4276 | 0.9270 | 0.5123 |
| 0.0975 | 5.0 | 5345 | 1.1117 | 0.5324 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
snunlp/KR-SBERT-V40K-klueNLI-augSTS
|
snunlp
| 2022-08-23T07:12:47Z | 249,821 | 60 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-05-03T03:34:16Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ko
widget:
- source_sentence: "그 식당은 파리를 날린다"
sentences:
- "그 식당에는 손님이 없다"
- "그 식당에서는 드론을 날린다"
- "파리가 식당에 날아다닌다"
example_title: "Restaurant"
- source_sentence: "잠이 옵니다"
sentences:
- "잠이 안 옵니다"
- "졸음이 옵니다"
- "기차가 옵니다"
example_title: "Sleepy"
---
# snunlp/KR-SBERT-V40K-klueNLI-augSTS
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
model = AutoModel.from_pretrained('snunlp/KR-SBERT-V40K-klueNLI-augSTS')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=snunlp/KR-SBERT-V40K-klueNLI-augSTS)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Application for document classification
Tutorial in Google Colab: https://colab.research.google.com/drive/1S6WSjOx9h6Wh_rX1Z2UXwx9i_uHLlOiM
|Model|Accuracy|
|-|-|
|KR-SBERT-Medium-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-STS|0.8400|
|KR-SBERT-V40K-NLI-augSTS|0.8511|
|KR-SBERT-V40K-klueNLI-augSTS|**0.8628**|
## Citation
```bibtex
@misc{kr-sbert,
author = {Park, Suzi and Hyopil Shin},
title = {KR-SBERT: A Pre-trained Korean-specific Sentence-BERT model},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/snunlp/KR-SBERT}}
}
```
|
nguyenkhoa2407/autotrain-bert-NER-favsbot
|
nguyenkhoa2407
| 2022-08-23T06:38:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain",
"en",
"dataset:nguyenkhoa2407/autotrain-data-default_model_favsbot_data",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T06:35:43Z |
---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- nguyenkhoa2407/autotrain-data-default_model_favsbot_data
co2_eq_emissions:
emissions: 0.012034916031396342
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1300449813
- CO2 Emissions (in grams): 0.0120
## Validation Metrics
- Loss: 1.004
- Accuracy: 0.710
- Precision: 0.542
- Recall: 0.413
- F1: 0.468
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/nguyenkhoa2407/autotrain-default_model_favsbot_data-1300449813
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("nguyenkhoa2407/autotrain-default_model_favsbot_data-1300449813", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("nguyenkhoa2407/autotrain-default_model_favsbot_data-1300449813", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
chintagunta85/electramed-small-ADE-ner
|
chintagunta85
| 2022-08-23T05:45:15Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T05:40:55Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-ADE-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-ADE-ner
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1548
- Precision: 0.8358
- Recall: 0.9064
- F1: 0.8697
- Accuracy: 0.9581
## Model description
More information needed
## Intended uses & limitations
More information needed
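A minimal usage sketch (the adverse-drug-event label names come from the fine-tuned config, which this card does not list):

```python
from transformers import pipeline

ade_ner = pipeline(
    "token-classification",
    model="chintagunta85/electramed-small-ADE-ner",
    aggregation_strategy="simple",
)
print(ade_ner("The patient developed severe nausea after starting metformin."))
```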
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5587 | 1.0 | 201 | 0.4107 | 0.7291 | 0.7982 | 0.7621 | 0.8983 |
| 0.2114 | 2.0 | 402 | 0.2663 | 0.7716 | 0.8826 | 0.8234 | 0.9445 |
| 0.1421 | 3.0 | 603 | 0.2183 | 0.8033 | 0.9030 | 0.8502 | 0.9488 |
| 0.2204 | 4.0 | 804 | 0.1878 | 0.8279 | 0.9012 | 0.8630 | 0.9553 |
| 0.5825 | 5.0 | 1005 | 0.1712 | 0.8289 | 0.8967 | 0.8615 | 0.9566 |
| 0.0685 | 6.0 | 1206 | 0.1647 | 0.8333 | 0.9067 | 0.8685 | 0.9572 |
| 0.0973 | 7.0 | 1407 | 0.1593 | 0.8365 | 0.9049 | 0.8693 | 0.9578 |
| 0.1683 | 8.0 | 1608 | 0.1574 | 0.8367 | 0.9082 | 0.8710 | 0.9577 |
| 0.065 | 9.0 | 1809 | 0.1557 | 0.8397 | 0.9052 | 0.8712 | 0.9583 |
| 0.179 | 10.0 | 2010 | 0.1548 | 0.8358 | 0.9064 | 0.8697 | 0.9581 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ish97/bert-finetuned-chunking
|
ish97
| 2022-08-23T05:20:26Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T04:55:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-chunking
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9229691876750701
- name: Recall
type: recall
value: 0.9217857559156079
- name: F1
type: f1
value: 0.9223770922027176
- name: Accuracy
type: accuracy
value: 0.961882616118208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-chunking
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Precision: 0.9230
- Recall: 0.9218
- F1: 0.9224
- Accuracy: 0.9619
## Model description
More information needed
## Intended uses & limitations
More information needed
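A minimal usage sketch for phrase chunking with the token-classification pipeline (the chunk tags follow the conll2003 tag set stored in the model config):

```python
from transformers import pipeline

chunker = pipeline(
    "token-classification",
    model="ish97/bert-finetuned-chunking",
    aggregation_strategy="simple",
)
print(chunker("The quick brown fox jumps over the lazy dog."))
```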
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1887 | 1.0 | 1756 | 0.1793 | 0.9167 | 0.9112 | 0.9139 | 0.9573 |
| 0.128 | 2.0 | 3512 | 0.1552 | 0.9228 | 0.9187 | 0.9207 | 0.9609 |
| 0.091 | 3.0 | 5268 | 0.1594 | 0.9230 | 0.9218 | 0.9224 | 0.9619 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
VanHoan/codeparrot-ds
|
VanHoan
| 2022-08-23T04:44:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-23T04:20:33Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
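The card does not show how to generate with the model; a minimal sketch with the text-generation pipeline is given below. The Python prompt and the sampling settings are illustrative assumptions, not settings taken from the card.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="VanHoan/codeparrot-ds")
prompt = "def fibonacci(n):\n    "
print(generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.2)[0]["generated_text"])
```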
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Teeto/reviews-classification
|
Teeto
| 2022-08-23T01:42:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-22T20:35:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: reviews-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reviews-classification
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5442
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
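A minimal usage sketch returning a score for every class (the label names and the kind of reviews used for training are not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Teeto/reviews-classification")
print(clf("Great value for money, would definitely order again.", top_k=None))
```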
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 350 | 0.4666 | 0.86 |
| 0.4577 | 2.0 | 700 | 0.5500 | 0.8525 |
| 0.2499 | 3.0 | 1050 | 0.5442 | 0.875 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Tokenizers 0.12.1
|
VanHoan/mt5-small-finetuned-amazon-en-ja
|
VanHoan
| 2022-08-23T00:46:48Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-08-22T23:44:35Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-ja
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2749
- Rouge1: 16.6603
- Rouge2: 8.1096
- Rougel: 16.0117
- Rougelsum: 16.1001
## Model description
More information needed
## Intended uses & limitations
More information needed
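A minimal usage sketch with the summarization pipeline; judging only from the model name, inputs are assumed to be short Amazon-style product reviews in English or Japanese.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="VanHoan/mt5-small-finetuned-amazon-en-ja")
review = ("I bought this kettle a month ago. It boils water quickly, "
          "the handle stays cool, and it looks great on the counter.")
print(summarizer(review, max_length=32))
```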
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.0415 | 1.0 | 773 | 3.6621 | 11.6952 | 4.8642 | 11.3154 | 11.3683 |
| 4.1249 | 2.0 | 1546 | 3.3933 | 14.3113 | 6.2067 | 13.9923 | 14.0476 |
| 3.7462 | 3.0 | 2319 | 3.3725 | 15.7855 | 8.0892 | 15.2485 | 15.3145 |
| 3.5608 | 4.0 | 3092 | 3.3270 | 16.0732 | 7.8202 | 15.4816 | 15.6421 |
| 3.4471 | 5.0 | 3865 | 3.2908 | 16.4399 | 7.6723 | 15.514 | 15.7309 |
| 3.3604 | 6.0 | 4638 | 3.2904 | 16.6074 | 8.3131 | 16.0711 | 16.1382 |
| 3.3081 | 7.0 | 5411 | 3.2827 | 16.2547 | 8.1096 | 15.6128 | 15.7097 |
| 3.2905 | 8.0 | 6184 | 3.2749 | 16.6603 | 8.1096 | 16.0117 | 16.1001 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
saahith/wav2vec2_base_100h_ngram
|
saahith
| 2022-08-22T22:20:01Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-22T21:42:48Z |
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
# Wav2Vec2-Base-100h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
This is the base model, pretrained and fine-tuned on 100 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# tokenize
input_values = processor(ds[0]["speech"], return_tensors="pt", padding="longest", sampling_rate=16_000).input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
def map_to_pred(batch):
    # batched map with batch_size=1, so "audio" is a list of decoded audio dicts
    speech = [audio["array"] for audio in batch["audio"]]
    input_values = processor(speech, return_tensors="pt", padding="longest", sampling_rate=16_000).input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 6.1 | 13.5 |
|
spacemanidol/esci-mlm-us-bert-base-uncased
|
spacemanidol
| 2022-08-22T22:01:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-22T18:21:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esci-us-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esci-us-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1785
- Accuracy: 0.7499
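A minimal fill-mask sketch for this checkpoint (the example query is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="spacemanidol/esci-mlm-us-bert-base-uncased")
print(fill_mask("wireless bluetooth [MASK] with noise cancellation"))
```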
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 1.18.0
- Tokenizers 0.12.1
|
spacemanidol/esci-mlm-alllang-bert-base-uncased
|
spacemanidol
| 2022-08-22T21:15:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-22T18:20:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: esci-all-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esci-all-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0435
- Accuracy: 0.7740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 1.18.0
- Tokenizers 0.12.1
|
jerostephan/Architecture_Diffusion_1.5M
|
jerostephan
| 2022-08-22T20:36:04Z | 0 | 3 | null |
[
"region:us"
] | null | 2022-08-22T19:57:12Z |
# 512x512 Diffusion (Architecture fine-tuned)
## Detailed description
A 512x512 unconditional diffusion model, fine-tuned for 900,000 samples from the 512x512 unconditional ImageNet diffusion model. It was fine-tuned on 60,000 architecture images from the AIDA dataset by Harvard x ArchDaily.
## Config (as used in Disco Diffusion)
```python
{
    "attention_resolutions": '32, 16, 8',
    "class_cond": False,
    "diffusion_steps": 1000,
    "image_size": 512,
    "learn_sigma": True,
    "noise_schedule": "linear",
    "num_channels": 256,
    "num_head_channels": 64,
    "num_res_blocks": 2,
    "resblock_updown": True,
    "rescale_timesteps": True,
    "timestep_respacing": "250",
    "use_scale_shift_norm": True
}
```
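A hedged loading sketch using the utilities from OpenAI's guided-diffusion codebase (which Disco Diffusion builds on); the checkpoint filename below is an assumption, so check the repository file list for the actual `.pt` name:
```python
import torch
from guided_diffusion.script_util import model_and_diffusion_defaults, create_model_and_diffusion

# Start from the library defaults and apply the config above
config = model_and_diffusion_defaults()
config.update({
    "attention_resolutions": "32, 16, 8",
    "class_cond": False,
    "diffusion_steps": 1000,
    "image_size": 512,
    "learn_sigma": True,
    "noise_schedule": "linear",
    "num_channels": 256,
    "num_head_channels": 64,
    "num_res_blocks": 2,
    "resblock_updown": True,
    "rescale_timesteps": True,
    "timestep_respacing": "250",
    "use_scale_shift_norm": True,
})

model, diffusion = create_model_and_diffusion(**config)

# Hypothetical filename -- replace with the checkpoint shipped in this repository
state_dict = torch.load("architecture_diffusion_512x512.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```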
---
license: cc
---
|
pinecone/movie-recommender-user-model
|
pinecone
| 2022-08-22T20:21:58Z | 0 | 2 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-07-31T21:04:13Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
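One way to load this Keras model is with `from_pretrained_keras` from the `huggingface_hub` library (a minimal sketch; the expected inputs depend on how the recommender was built, which is not documented here):
```python
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("pinecone/movie-recommender-user-model")
model.summary()
```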
|
gayanin/bart-paraphrasing-mlm-med-mask-filling
|
gayanin
| 2022-08-22T16:50:59Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-22T13:28:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrasing-mlm-med-mask-filling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrasing-mlm-med-mask-filling
This model is a fine-tuned version of [gayanin/bart-paraphrase-pubmed-1.1](https://huggingface.co/gayanin/bart-paraphrase-pubmed-1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2528
- Rouge2 Precision: 0.8317
- Rouge2 Recall: 0.5986
- Rouge2 Fmeasure: 0.6751
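A minimal usage sketch (the masked sentence is an illustrative example, not taken from the training data):
```python
from transformers import pipeline

mask_filler = pipeline(
    "text2text-generation",
    model="gayanin/bart-paraphrasing-mlm-med-mask-filling",
)
print(mask_filler("The patient was diagnosed with <mask> diabetes mellitus."))
```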
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.3396 | 1.0 | 15827 | 0.3030 | 0.8186 | 0.5903 | 0.6652 |
| 0.2879 | 2.0 | 31654 | 0.2706 | 0.8257 | 0.5952 | 0.6708 |
| 0.2514 | 3.0 | 47481 | 0.2572 | 0.8295 | 0.5964 | 0.6729 |
| 0.2361 | 4.0 | 63308 | 0.2528 | 0.8317 | 0.5986 | 0.6751 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
JAlexis/bert_v2
|
JAlexis
| 2022-08-22T16:22:09Z | 67 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-22T16:16:52Z |
---
language: en
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "How can I protect myself against covid-19?"
context: " "
---
## Model description
This model was obtained by fine-tuning deepset/bert-base-cased-squad2 on the CORD-19 dataset.
## How to use
```python
from transformers.pipelines import pipeline
model_name = "JAlexis/PruebaBert"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
'question': 'How can I protect myself against covid-19?',
'context': 'Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. [6] to the current context of the COVID-19 pandemic and the culture of the USA. Applying this model in a different time and context provides an opportunity to make comparisons of reactions to information sources across a decade of evolving attitudes toward media and government, between two cultures (Hong Kong vs. the USA), and between two considerably different global pandemics (H1N1 vs. COVID-19). ',
}
nlp(inputs)
```
|
obi/deid_bert_i2b2
|
obi
| 2022-08-22T13:28:40Z | 2,478 | 20 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"deidentification",
"medical notes",
"ehr",
"phi",
"en",
"dataset:I2B2",
"arxiv:1904.03323",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
thumbnail: "https://www.onebraveidea.org/wp-content/uploads/2019/07/OBI-Logo-Website.png"
tags:
- deidentification
- medical notes
- ehr
- phi
datasets:
- I2B2
metrics:
- F1
- Recall
- AUC
widget:
- text: "Physician Discharge Summary Admit date: 10/12/1982 Discharge date: 10/22/1982 Patient Information Jack Reacher, 54 y.o. male (DOB = 1/21/1928)."
- text: "Home Address: 123 Park Drive, San Diego, CA, 03245. Home Phone: 202-555-0199 (home)."
- text: "Hospital Care Team Service: Orthopedics Inpatient Attending: Roger C Kelly, MD Attending phys phone: (634)743-5135 Discharge Unit: HCS843 Primary Care Physician: Hassan V Kim, MD 512-832-5025."
license: mit
---
# Model Description
* A ClinicalBERT [[Alsentzer et al., 2019]](https://arxiv.org/pdf/1904.03323.pdf) model fine-tuned for de-identification of medical notes.
* Sequence Labeling (token classification): The model was trained to predict protected health information (PHI/PII) entities (spans). A list of protected health information categories is given by [HIPAA](https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html).
* A token can either be classified as non-PHI or as one of the 11 PHI types. Token predictions are aggregated to spans by making use of BILOU tagging.
* The PHI labels that were used for training and other details can be found here: [Annotation Guidelines](https://github.com/obi-ml-public/ehr_deidentification/blob/master/AnnotationGuidelines.md)
* More details on how to use this model, the format of data and other useful information is present in the GitHub repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
# How to use
* A demo on how the model works (using model predictions to de-identify a medical note) is on this space: [Medical-Note-Deidentification](https://huggingface.co/spaces/obi/Medical-Note-Deidentification).
* Steps on how this model can be used to run a forward pass can be found here: [Forward Pass](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/forward_pass)
* In brief, the steps are:
* Sentencize (the model aggregates the sentences back to the note level) and tokenize the dataset.
* Use the predict function of this model to gather the predictions (i.e., predictions for each token).
* Additionally, the model predictions can be used to remove PHI from the original note/text.
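For a quick look at the raw predictions outside of the recommended sentencization/tokenization pipeline above, a minimal `transformers` pipeline sketch (the note text is illustrative):
```python
from transformers import pipeline

deid = pipeline(
    "token-classification",
    model="obi/deid_bert_i2b2",
    aggregation_strategy="simple",
)

note = "Physician Discharge Summary Admit date: 10/12/1982 Patient: Jack Reacher, 54 y.o. male."
for entity in deid(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```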
# Dataset
* The I2B2 2014 [[Stubbs and Uzuner, 2015]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4978170/) dataset was used to train this model.
| | I2B2 | | I2B2 | |
| --------- | --------------------- | ---------- | -------------------- | ---------- |
| | TRAIN SET - 790 NOTES | | TEST SET - 514 NOTES | |
| PHI LABEL | COUNT | PERCENTAGE | COUNT | PERCENTAGE |
| DATE | 7502 | 43.69 | 4980 | 44.14 |
| STAFF | 3149 | 18.34 | 2004 | 17.76 |
| HOSP | 1437 | 8.37 | 875 | 7.76 |
| AGE | 1233 | 7.18 | 764 | 6.77 |
| LOC | 1206 | 7.02 | 856 | 7.59 |
| PATIENT | 1316 | 7.66 | 879 | 7.79 |
| PHONE | 317 | 1.85 | 217 | 1.92 |
| ID | 881 | 5.13 | 625 | 5.54 |
| PATORG | 124 | 0.72 | 82 | 0.73 |
| EMAIL | 4 | 0.02 | 1 | 0.01 |
| OTHERPHI | 2 | 0.01 | 0 | 0 |
| TOTAL | 17171 | 100 | 11283 | 100 |
# Training procedure
* Steps on how this model was trained can be found here: [Training](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/train). The "model_name_or_path" was set to: "emilyalsentzer/Bio_ClinicalBERT".
* The dataset was sentencized with the en_core_sci_sm sentencizer from spacy.
* The dataset was then tokenized with a custom tokenizer built on top of the en_core_sci_sm tokenizer from spacy.
* For each sentence we added 32 tokens on the left (from previous sentences) and 32 tokens on the right (from the next sentences).
* The added tokens are not used for learning - i.e, the loss is not computed on these tokens - they are used as additional context.
* Each sequence contained a maximum of 128 tokens (including the 32 tokens added on). Longer sequences were split.
* The sentencized and tokenized dataset with the token level labels based on the BILOU notation was used to train the model.
* The model is fine-tuned from the pre-trained ClinicalBERT model (emilyalsentzer/Bio_ClinicalBERT) noted above.
* Training details:
* Input sequence length: 128
* Batch size: 32
* Optimizer: AdamW
* Learning rate: 4e-5
* Dropout: 0.1
# Results
# Questions?
Post a Github issue on the repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
|
brilianputraa/Lunar-LanderV2-v1
|
brilianputraa
| 2022-08-22T13:22:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-07T10:03:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -34.99 +/- 57.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
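A hedged sketch of what that usage could look like; the checkpoint filename is a guess, so check the repository's file list for the actual `.zip` name:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="brilianputraa/Lunar-LanderV2-v1",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```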
|
VanHoan/marian-finetuned-kde4-en-to-vi
|
VanHoan
| 2022-08-22T13:16:19Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-08-22T12:45:16Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-vi
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-vi
split: train
args: en-vi
metrics:
- name: Bleu
type: bleu
value: 51.100833140674204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-vi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2217
- Bleu: 51.1008
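A minimal translation sketch (the sample sentence is from the KDE4 domain and purely illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="VanHoan/marian-finetuned-kde4-en-to-vi")
print(translator("Default to expanded threads"))
```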
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
danieladejumo/MLAgents-Worm
|
danieladejumo
| 2022-08-22T13:13:48Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2022-08-22T13:13:42Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Write your model_id: danieladejumo/MLAgents-Worm
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
domsebalj/GPcroaT
|
domsebalj
| 2022-08-22T12:05:15Z | 10 | 2 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"GPT-2",
"hr",
"dataset:hrwac",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-09T10:49:13Z |
---
language: hr
tags:
- GPT-2
datasets:
- hrwac
---
If you use this model for your own tasks, please share your results in the Community tab.
With TensorFlow you can use:
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("domsebalj/GPcroaT")
model = TFGPT2LMHeadModel.from_pretrained("domsebalj/GPcroaT")
text = "Zamijeni ovaj tekst vlastitim"
input_ids = tokenizer.encode(text, return_tensors='tf')
beam_output = model.generate(
input_ids,
max_length = 80,
min_length = 10,
num_beams = 10,
temperature = 5.7,
no_repeat_ngram_size=2,
num_return_sequences=5,
repetition_penalty =7.5,
length_penalty = 1.5,
top_k = 50
)
output = []
for i in beam_output:
    output.append(tokenizer.decode(i))
print(output)
```
|
alishudi/distil_mse_3
|
alishudi
| 2022-08-22T11:01:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-22T10:59:03Z |
Distillation settings used for training: `--alpha_ce 0.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_act 1.0 --alpha_clm 0.0 --alpha_mse 0.0002 --mlm`

Student model: 3 layers.
|
orkg/orkgnlp-templates-recommendation
|
orkg
| 2022-08-22T10:24:11Z | 16 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-22T10:04:42Z |
---
license: mit
---
This repository includes the files required to run the `Templates Recommendation` ORKG-NLP service.
Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service.
|
mekarahul/distilbert-base-uncased-finetuned-sent
|
mekarahul
| 2022-08-22T09:41:06Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-19T14:43:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5482
- Accuracy: 0.48
- F1: 0.3658
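A minimal inference sketch (the label names returned depend on the undocumented training data, so treat the output labels as opaque ids):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mekarahul/distilbert-base-uncased-finetuned-sent",
)
print(classifier("I really enjoyed this product."))
```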
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8587 | 1.0 | 100 | 1.2984 | 0.42 | 0.2603 |
| 0.7303 | 2.0 | 200 | 1.5482 | 0.48 | 0.3658 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.12.1
|
ericntay/ft_clinical_bert_diabetes
|
ericntay
| 2022-08-22T09:19:39Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-22T08:42:37Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ft_clinical_bert_diabetes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_clinical_bert_diabetes
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1020
- Accuracy: 0.9632
- F1: 0.9578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1641 | 1.0 | 1064 | 0.1604 | 0.9526 | 0.9462 |
| 0.1088 | 2.0 | 2128 | 0.0878 | 0.9623 | 0.9573 |
| 0.0956 | 3.0 | 3192 | 0.0963 | 0.9632 | 0.9578 |
| 0.0858 | 4.0 | 4256 | 0.1020 | 0.9632 | 0.9578 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
nghuyong/ernie-gram-zh
|
nghuyong
| 2022-08-22T09:10:05Z | 98 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"zh",
"arxiv:2010.12148",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-04-03T17:34:43Z |
---
language: zh
---
# ERNIE-Gram-zh
## Introduction
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
More detail: https://arxiv.org/abs/2010.12148
## Released Model Info
|Model Name|Language|Model Structure|
|:---:|:---:|:---:|
|ernie-gram-zh| Chinese |Layer:12, Hidden:768, Heads:12|
This released PyTorch model is converted from the officially released PaddlePaddle ERNIE model, and a series of experiments have been conducted to verify the accuracy of the conversion.
- Official PaddlePaddle ERNIE repo: https://github.com/PaddlePaddle/PaddleNLP/blob/develop/paddlenlp/transformers/ernie_gram/modeling.py
- Pytorch Conversion repo: https://github.com/nghuyong/ERNIE-Pytorch
## How to use
```Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-gram-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-gram-zh")
```
|