| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-27 18:27:39) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 500 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-27 18:23:41) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
good-ai-club/NBB | good-ai-club | 2022-06-16T08:51:44Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-06-16T08:37:56Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
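For semantic search or clustering, the pooled embeddings are typically L2-normalized and compared with cosine similarity. A minimal sketch continuing from the variables above (not part of the original card):
```python
import torch.nn.functional as F

# Normalize and compute pairwise cosine similarities between the sentence embeddings
normalized_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized_embeddings @ normalized_embeddings.T
print(cosine_scores)
```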
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3188 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 355,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1594,
"weight_decay": 0.01
}
```
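Purely as an illustration, the parameters above correspond roughly to the following `fit()` call in sentence-transformers; the training pairs below are hypothetical stand-ins for the undocumented dataset behind the DataLoader, and the evaluator is omitted for brevity:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('{MODEL_NAME}')
# Hypothetical training pairs; the real training data is not documented in this card
train_examples = [InputExample(texts=["A first sentence", "A second sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    evaluation_steps=355,   # only used when an EmbeddingSimilarityEvaluator is passed as `evaluator`
    warmup_steps=1594,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```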
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
QuickSilver007/dqn-SpaceInvadersNoFrameskip-v4 | QuickSilver007 | 2022-06-16T08:24:26Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-16T08:23:46Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 580.00 +/- 135.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga QuickSilver007 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga QuickSilver007
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
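Outside the RL Zoo scripts, the downloaded agent can also be loaded directly with stable-baselines3. A minimal sketch; the zip path is a hypothetical example of where the download may land:
```python
from stable_baselines3 import DQN

# Hypothetical path; adjust to wherever load_from_hub saved the agent.
# Note that predictions require the same AtariWrapper + frame-stack preprocessing used in training.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
print(model.policy)
```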
|
Jayaprakash/Grammar_correction | Jayaprakash | 2022-06-16T07:50:18Z | 3 | 0 | transformers | [
"transformers",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-10T10:00:53Z | https://github.com/kranti-gloify/grammarly/tree/Django_API_Final
|
waboucay/camembert-base-finetuned-repnum_wl-rua_wl_3_classes | waboucay | 2022-06-16T07:44:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-16T07:27:43Z | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.6 | 75.3 |
| test | 76.1 | 75.8 |
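The card does not include a usage snippet. A minimal sketch, assuming the checkpoint is a standard 3-class sequence-classification model over premise/hypothesis pairs (the example pair is hypothetical, and the class names come from the model's own config, which is not documented here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "waboucay/camembert-base-finetuned-repnum_wl-rua_wl_3_classes"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Hypothetical French premise/hypothesis pair
inputs = tokenizer("Le texte a été publié.", "Le texte est disponible.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```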
|
Corianas/PPO-QbertNoFrameskip-v4_1 | Corianas | 2022-06-16T07:17:48Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"QbertNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-16T07:17:21Z | ---
library_name: stable-baselines3
tags:
- QbertNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 570.00 +/- 192.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: QbertNoFrameskip-v4
type: QbertNoFrameskip-v4
---
# **PPO** Agent playing **QbertNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **QbertNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env QbertNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env QbertNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 1000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
twieland/MIX2_en-ja_helsinki | twieland | 2022-06-16T06:23:56Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-14T07:24:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX2_en-ja_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX2_en-ja_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-jap](https://huggingface.co/Helsinki-NLP/opus-mt-en-jap) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
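Purely as an illustration (not part of the original card), these hyperparameters map roughly onto the following `Seq2SeqTrainingArguments`; the output directory is a hypothetical placeholder:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="MIX2_en-ja_helsinki",    # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                           # "Native AMP" mixed precision
)
```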
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.5357 | 0.02 | 4000 | 2.9519 |
| 2.8601 | 0.04 | 8000 | 2.6962 |
| 2.6183 | 0.06 | 12000 | 2.5156 |
| 2.4731 | 0.08 | 16000 | 2.4312 |
| 2.3731 | 0.1 | 20000 | 2.3575 |
| 2.2964 | 0.11 | 24000 | 2.3319 |
| 2.238 | 0.13 | 28000 | 2.2802 |
| 2.1919 | 0.15 | 32000 | 2.2552 |
| 2.1479 | 0.17 | 36000 | 2.2354 |
| 2.1104 | 0.19 | 40000 | 2.2210 |
| 2.0788 | 0.21 | 44000 | 2.1835 |
| 2.0552 | 0.23 | 48000 | 2.1391 |
| 2.0228 | 0.25 | 52000 | 2.1338 |
| 2.0062 | 0.27 | 56000 | 2.1115 |
| 1.9868 | 0.29 | 60000 | 2.1025 |
| 1.9628 | 0.31 | 64000 | 2.1334 |
| 1.9474 | 0.32 | 68000 | 2.0935 |
| 1.9318 | 0.34 | 72000 | 2.1030 |
| 1.9187 | 0.36 | 76000 | 2.0605 |
| 1.9019 | 0.38 | 80000 | 2.0388 |
| 1.8916 | 0.4 | 84000 | 2.0360 |
| 1.8775 | 0.42 | 88000 | 2.0356 |
| 1.8689 | 0.44 | 92000 | 2.0315 |
| 1.8558 | 0.46 | 96000 | 2.0169 |
| 1.8431 | 0.48 | 100000 | 2.0213 |
| 1.8373 | 0.5 | 104000 | 2.0071 |
| 1.8224 | 0.52 | 108000 | 2.0093 |
| 1.8181 | 0.53 | 112000 | 1.9952 |
| 1.8087 | 0.55 | 116000 | 1.9927 |
| 1.7998 | 0.57 | 120000 | 1.9726 |
| 1.7947 | 0.59 | 124000 | 1.9817 |
| 1.7874 | 0.61 | 128000 | 1.9650 |
| 1.7781 | 0.63 | 132000 | 1.9688 |
| 1.7712 | 0.65 | 136000 | 1.9655 |
| 1.7631 | 0.67 | 140000 | 1.9561 |
| 1.7577 | 0.69 | 144000 | 1.9529 |
| 1.7528 | 0.71 | 148000 | 1.9447 |
| 1.746 | 0.73 | 152000 | 1.9700 |
| 1.7386 | 0.74 | 156000 | 1.9413 |
| 1.7329 | 0.76 | 160000 | 1.9329 |
| 1.7285 | 0.78 | 164000 | 1.9289 |
| 1.7227 | 0.8 | 168000 | 1.9337 |
| 1.7186 | 0.82 | 172000 | 1.9263 |
| 1.7116 | 0.84 | 176000 | 1.9407 |
| 1.7072 | 0.86 | 180000 | 1.9059 |
| 1.7032 | 0.88 | 184000 | 1.9380 |
| 1.6932 | 0.9 | 188000 | 1.9183 |
| 1.6921 | 0.92 | 192000 | 1.9131 |
| 1.6875 | 0.94 | 196000 | 1.9180 |
| 1.6846 | 0.96 | 200000 | 1.9040 |
| 1.6797 | 0.97 | 204000 | 1.9089 |
| 1.6725 | 0.99 | 208000 | 1.9024 |
| 1.6589 | 1.01 | 212000 | 1.8909 |
| 1.6507 | 1.03 | 216000 | 1.8837 |
| 1.6441 | 1.05 | 220000 | 1.8906 |
| 1.6445 | 1.07 | 224000 | 1.8914 |
| 1.6394 | 1.09 | 228000 | 1.8833 |
| 1.6382 | 1.11 | 232000 | 1.8837 |
| 1.6376 | 1.13 | 236000 | 1.8869 |
| 1.6329 | 1.15 | 240000 | 1.8829 |
| 1.6294 | 1.17 | 244000 | 1.8845 |
| 1.6273 | 1.18 | 248000 | 1.8888 |
| 1.6243 | 1.2 | 252000 | 1.8709 |
| 1.6226 | 1.22 | 256000 | 1.8418 |
| 1.6177 | 1.24 | 260000 | 1.8587 |
| 1.6151 | 1.26 | 264000 | 1.8526 |
| 1.6111 | 1.28 | 268000 | 1.8494 |
| 1.6084 | 1.3 | 272000 | 1.8781 |
| 1.6043 | 1.32 | 276000 | 1.8390 |
| 1.6011 | 1.34 | 280000 | 1.8603 |
| 1.5999 | 1.36 | 284000 | 1.8515 |
| 1.5954 | 1.38 | 288000 | 1.8356 |
| 1.5936 | 1.39 | 292000 | 1.8530 |
| 1.5916 | 1.41 | 296000 | 1.8475 |
| 1.5886 | 1.43 | 300000 | 1.8410 |
| 1.5883 | 1.45 | 304000 | 1.8153 |
| 1.5828 | 1.47 | 308000 | 1.8254 |
| 1.582 | 1.49 | 312000 | 1.8139 |
| 1.578 | 1.51 | 316000 | 1.8366 |
| 1.5723 | 1.53 | 320000 | 1.8353 |
| 1.5705 | 1.55 | 324000 | 1.8230 |
| 1.5691 | 1.57 | 328000 | 1.8194 |
| 1.5656 | 1.59 | 332000 | 1.8069 |
| 1.566 | 1.6 | 336000 | 1.8204 |
| 1.5604 | 1.62 | 340000 | 1.8307 |
| 1.5573 | 1.64 | 344000 | 1.8209 |
| 1.5547 | 1.66 | 348000 | 1.8320 |
| 1.5545 | 1.68 | 352000 | 1.8179 |
| 1.5519 | 1.7 | 356000 | 1.8323 |
| 1.545 | 1.72 | 360000 | 1.8005 |
| 1.5483 | 1.74 | 364000 | 1.8034 |
| 1.5454 | 1.76 | 368000 | 1.7997 |
| 1.5393 | 1.78 | 372000 | 1.8078 |
| 1.5381 | 1.8 | 376000 | 1.8204 |
| 1.5347 | 1.81 | 380000 | 1.8071 |
| 1.5327 | 1.83 | 384000 | 1.7997 |
| 1.529 | 1.85 | 388000 | 1.8012 |
| 1.5287 | 1.87 | 392000 | 1.8028 |
| 1.5273 | 1.89 | 396000 | 1.8103 |
| 1.5194 | 1.91 | 400000 | 1.8008 |
| 1.5197 | 1.93 | 404000 | 1.8004 |
| 1.5218 | 1.95 | 408000 | 1.8024 |
| 1.514 | 1.97 | 412000 | 1.7852 |
| 1.5146 | 1.99 | 416000 | 1.7908 |
| 1.5045 | 2.01 | 420000 | 1.7864 |
| 1.4876 | 2.02 | 424000 | 1.7813 |
| 1.4846 | 2.04 | 428000 | 1.7822 |
| 1.4865 | 2.06 | 432000 | 1.7737 |
| 1.4857 | 2.08 | 436000 | 1.7668 |
| 1.4825 | 2.1 | 440000 | 1.7681 |
| 1.4828 | 2.12 | 444000 | 1.7685 |
| 1.4821 | 2.14 | 448000 | 1.7636 |
| 1.4778 | 2.16 | 452000 | 1.7778 |
| 1.4803 | 2.18 | 456000 | 1.7834 |
| 1.4766 | 2.2 | 460000 | 1.7801 |
| 1.4741 | 2.22 | 464000 | 1.7601 |
| 1.4705 | 2.23 | 468000 | 1.7665 |
| 1.4739 | 2.25 | 472000 | 1.7604 |
| 1.4694 | 2.27 | 476000 | 1.7803 |
| 1.4665 | 2.29 | 480000 | 1.7835 |
| 1.4668 | 2.31 | 484000 | 1.7670 |
| 1.4605 | 2.33 | 488000 | 1.7629 |
| 1.4626 | 2.35 | 492000 | 1.7612 |
| 1.4627 | 2.37 | 496000 | 1.7612 |
| 1.4569 | 2.39 | 500000 | 1.7557 |
| 1.455 | 2.41 | 504000 | 1.7599 |
| 1.4547 | 2.43 | 508000 | 1.7569 |
| 1.453 | 2.44 | 512000 | 1.7589 |
| 1.4515 | 2.46 | 516000 | 1.7679 |
| 1.4501 | 2.48 | 520000 | 1.7574 |
| 1.4446 | 2.5 | 524000 | 1.7526 |
| 1.4456 | 2.52 | 528000 | 1.7506 |
| 1.4445 | 2.54 | 532000 | 1.7484 |
| 1.4428 | 2.56 | 536000 | 1.7447 |
| 1.439 | 2.58 | 540000 | 1.7468 |
| 1.441 | 2.6 | 544000 | 1.7609 |
| 1.4358 | 2.62 | 548000 | 1.7498 |
| 1.4318 | 2.64 | 552000 | 1.7592 |
| 1.4276 | 2.65 | 556000 | 1.7452 |
| 1.4317 | 2.67 | 560000 | 1.7500 |
| 1.4277 | 2.69 | 564000 | 1.7392 |
| 1.4259 | 2.71 | 568000 | 1.7351 |
| 1.4239 | 2.73 | 572000 | 1.7385 |
| 1.4191 | 2.75 | 576000 | 1.7487 |
| 1.4204 | 2.77 | 580000 | 1.7392 |
| 1.4176 | 2.79 | 584000 | 1.7372 |
| 1.4147 | 2.81 | 588000 | 1.7347 |
| 1.4154 | 2.83 | 592000 | 1.7085 |
| 1.4134 | 2.85 | 596000 | 1.7103 |
| 1.4091 | 2.87 | 600000 | 1.7124 |
| 1.4091 | 2.88 | 604000 | 1.7369 |
| 1.406 | 2.9 | 608000 | 1.7142 |
| 1.4028 | 2.92 | 612000 | 1.7376 |
| 1.4019 | 2.94 | 616000 | 1.7201 |
| 1.4018 | 2.96 | 620000 | 1.7230 |
| 1.3959 | 2.98 | 624000 | 1.7206 |
| 1.3985 | 3.0 | 628000 | 1.7183 |
| 1.3681 | 3.02 | 632000 | 1.7283 |
| 1.3668 | 3.04 | 636000 | 1.7330 |
| 1.3687 | 3.06 | 640000 | 1.7187 |
| 1.3681 | 3.08 | 644000 | 1.7163 |
| 1.3687 | 3.09 | 648000 | 1.7249 |
| 1.364 | 3.11 | 652000 | 1.7283 |
| 1.364 | 3.13 | 656000 | 1.7091 |
| 1.3652 | 3.15 | 660000 | 1.7030 |
| 1.3623 | 3.17 | 664000 | 1.7058 |
| 1.3604 | 3.19 | 668000 | 1.7101 |
| 1.3598 | 3.21 | 672000 | 1.7104 |
| 1.3577 | 3.23 | 676000 | 1.7028 |
| 1.3574 | 3.25 | 680000 | 1.7023 |
| 1.3546 | 3.27 | 684000 | 1.7197 |
| 1.3549 | 3.29 | 688000 | 1.7045 |
| 1.3534 | 3.3 | 692000 | 1.6990 |
| 1.3511 | 3.32 | 696000 | 1.6971 |
| 1.3504 | 3.34 | 700000 | 1.6894 |
| 1.346 | 3.36 | 704000 | 1.6820 |
| 1.3467 | 3.38 | 708000 | 1.6920 |
| 1.3461 | 3.4 | 712000 | 1.6897 |
| 1.3425 | 3.42 | 716000 | 1.6962 |
| 1.34 | 3.44 | 720000 | 1.6864 |
| 1.3408 | 3.46 | 724000 | 1.6860 |
| 1.3387 | 3.48 | 728000 | 1.6924 |
| 1.3377 | 3.5 | 732000 | 1.6919 |
| 1.3378 | 3.51 | 736000 | 1.6858 |
| 1.334 | 3.53 | 740000 | 1.6816 |
| 1.3347 | 3.55 | 744000 | 1.6867 |
| 1.3307 | 3.57 | 748000 | 1.6859 |
| 1.3316 | 3.59 | 752000 | 1.6896 |
| 1.3257 | 3.61 | 756000 | 1.6824 |
| 1.3222 | 3.63 | 760000 | 1.6819 |
| 1.3247 | 3.65 | 764000 | 1.6809 |
| 1.3207 | 3.67 | 768000 | 1.6775 |
| 1.3227 | 3.69 | 772000 | 1.6807 |
| 1.3203 | 3.71 | 776000 | 1.6750 |
| 1.3203 | 3.72 | 780000 | 1.6758 |
| 1.316 | 3.74 | 784000 | 1.6787 |
| 1.3147 | 3.76 | 788000 | 1.6747 |
| 1.3146 | 3.78 | 792000 | 1.6718 |
| 1.3137 | 3.8 | 796000 | 1.6744 |
| 1.3143 | 3.82 | 800000 | 1.6733 |
| 1.3123 | 3.84 | 804000 | 1.6754 |
| 1.3069 | 3.86 | 808000 | 1.6734 |
| 1.3122 | 3.88 | 812000 | 1.6742 |
| 1.3074 | 3.9 | 816000 | 1.6742 |
| 1.3006 | 3.92 | 820000 | 1.6709 |
| 1.308 | 3.93 | 824000 | 1.6714 |
| 1.3063 | 3.95 | 828000 | 1.6727 |
| 1.3036 | 3.97 | 832000 | 1.6711 |
| 1.3048 | 3.99 | 836000 | 1.6703 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Corianas/SkiingNoFrameskip-v4_ScoringTest | Corianas | 2022-06-16T06:22:47Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SkiingNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-16T06:20:38Z | ---
library_name: stable-baselines3
tags:
- SkiingNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -30000.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SkiingNoFrameskip-v4
type: SkiingNoFrameskip-v4
---
# **PPO** Agent playing **SkiingNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SkiingNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env SkiingNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env SkiingNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SkiingNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env SkiingNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
ouiame/T5_mlsum | ouiame | 2022-06-16T05:31:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-15T13:51:07Z | ---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 976.8219757938544
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232789
- CO2 Emissions (in grams): 976.8219757938544
## Validation Metrics
- Loss: 1.7047555446624756
- Rouge1: 20.2108
- Rouge2: 7.8633
- RougeL: 16.9554
- RougeLsum: 17.3178
- Gen Len: 18.9874
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-trainproject-985232789
``` |
huggingtweets/shammytv | huggingtweets | 2022-06-16T05:07:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-16T04:38:10Z | ---
language: en
thumbnail: http://www.huggingtweets.com/shammytv/1655356038315/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1191610860973764608/vH0nHzO8_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Swift</div>
<div style="text-align: center; font-size: 14px;">@shammytv</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Swift.
| Data | Swift |
| --- | --- |
| Tweets downloaded | 3203 |
| Retweets | 173 |
| Short tweets | 449 |
| Tweets kept | 2581 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/12udt9tp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shammytv's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wp1epufz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wp1epufz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shammytv')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
YYSH/Test-demo-colab | YYSH | 2022-06-16T04:40:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-16T02:32:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: Test-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Test-demo-colab
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9479
- Wer: 0.6856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.2676 | 1.0 | 500 | 2.2725 | 1.0013 |
| 2.0086 | 2.01 | 1000 | 1.2788 | 0.8053 |
| 1.6389 | 3.01 | 1500 | 1.1333 | 0.7458 |
| 1.4908 | 4.02 | 2000 | 1.0369 | 0.7356 |
| 1.4137 | 5.02 | 2500 | 0.9894 | 0.7111 |
| 1.3507 | 6.02 | 3000 | 0.9394 | 0.7098 |
| 1.3101 | 7.03 | 3500 | 0.9531 | 0.6966 |
| 1.2682 | 8.03 | 4000 | 0.9255 | 0.6892 |
| 1.239 | 9.04 | 4500 | 0.9222 | 0.6818 |
| 1.2161 | 10.04 | 5000 | 0.9079 | 0.6911 |
| 1.1871 | 11.04 | 5500 | 0.9100 | 0.7033 |
| 1.1688 | 12.05 | 6000 | 0.9080 | 0.6924 |
| 1.1383 | 13.05 | 6500 | 0.9097 | 0.6910 |
| 1.1304 | 14.06 | 7000 | 0.9052 | 0.6810 |
| 1.1181 | 15.06 | 7500 | 0.9025 | 0.6847 |
| 1.0905 | 16.06 | 8000 | 0.9296 | 0.6832 |
| 1.0744 | 17.07 | 8500 | 0.9120 | 0.6912 |
| 1.0675 | 18.07 | 9000 | 0.9039 | 0.6864 |
| 1.0511 | 19.08 | 9500 | 0.9157 | 0.7004 |
| 1.0401 | 20.08 | 10000 | 0.9259 | 0.6792 |
| 1.0319 | 21.08 | 10500 | 0.9478 | 0.6976 |
| 1.0194 | 22.09 | 11000 | 0.9438 | 0.6820 |
| 1.0117 | 23.09 | 11500 | 0.9577 | 0.6891 |
| 1.0038 | 24.1 | 12000 | 0.9670 | 0.6918 |
| 0.9882 | 25.1 | 12500 | 0.9579 | 0.6884 |
| 0.9979 | 26.1 | 13000 | 0.9502 | 0.6869 |
| 0.9767 | 27.11 | 13500 | 0.9537 | 0.6833 |
| 0.964 | 28.11 | 14000 | 0.9525 | 0.6880 |
| 0.9867 | 29.12 | 14500 | 0.9479 | 0.6856 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
huggingtweets/pronewchaos | huggingtweets | 2022-06-16T04:13:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-16T01:03:53Z | ---
language: en
thumbnail: http://www.huggingtweets.com/pronewchaos/1655352793305/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1519208550865653760/gxiNIWdv_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Saitoshi Nanomoto 🌑⚛️🟥</div>
<div style="text-align: center; font-size: 14px;">@pronewchaos</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Saitoshi Nanomoto 🌑⚛️🟥.
| Data | Saitoshi Nanomoto 🌑⚛️🟥 |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 18 |
| Short tweets | 617 |
| Tweets kept | 2615 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3b2f6bkt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pronewchaos's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1lho9s4n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1lho9s4n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pronewchaos')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sasuke/bert-base-uncased-finetuned-sst2 | sasuke | 2022-06-16T03:58:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-13T03:38:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9323394495412844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2982
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1817 | 1.0 | 4210 | 0.2920 | 0.9186 |
| 0.1297 | 2.0 | 8420 | 0.3069 | 0.9209 |
| 0.0978 | 3.0 | 12630 | 0.2982 | 0.9323 |
| 0.062 | 4.0 | 16840 | 0.3278 | 0.9312 |
| 0.0303 | 5.0 | 21050 | 0.3642 | 0.9323 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/acai28 | huggingtweets | 2022-06-16T03:39:49Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-16T03:32:47Z | ---
language: en
thumbnail: http://www.huggingtweets.com/acai28/1655350773093/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1527251112604184576/3dKVjGwK_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">alec</div>
<div style="text-align: center; font-size: 14px;">@acai28</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from alec.
| Data | alec |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 165 |
| Short tweets | 488 |
| Tweets kept | 2592 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rd31m5h3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @acai28's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/w8y3ix5h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/w8y3ix5h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/acai28')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Willy/bert-base-spanish-wwm-cased-finetuned-NLP-IE | Willy | 2022-06-15T23:52:05Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-15T23:25:26Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-NLP-IE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-NLP-IE
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6260
- Accuracy: 0.7015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6052 | 1.0 | 9 | 0.6370 | 0.7015 |
| 0.5501 | 2.0 | 18 | 0.6260 | 0.7015 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
huggingtweets/hotdogsladies | huggingtweets | 2022-06-15T23:01:56Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-15T23:00:51Z | ---
language: en
thumbnail: http://www.huggingtweets.com/hotdogsladies/1655334112277/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474526156430798849/0Z_zfYqH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Merlin Mann</div>
<div style="text-align: center; font-size: 14px;">@hotdogsladies</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Merlin Mann.
| Data | Merlin Mann |
| --- | --- |
| Tweets downloaded | 314 |
| Retweets | 41 |
| Short tweets | 48 |
| Tweets kept | 225 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/epnyc8a1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hotdogsladies's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bjnvmjn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bjnvmjn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hotdogsladies')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fourthbrain-demo/finetuning-sentiment-model-3000-samples | fourthbrain-demo | 2022-06-15T22:51:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-15T22:18:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3023
- Accuracy: 0.8767
- F1: 0.8771
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
emilys/BERTweet-WNUT17 | emilys | 2022-06-15T22:31:22Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"en",
"dataset:wnut_17",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-14T22:59:18Z | ---
language:
- en
tags:
- NER
datasets:
- wnut_17
---
bertweet-base (https://huggingface.co/vinai/bertweet-base) finetuned on WNUT (2017), following https://github.com/huggingface/transformers/tree/main/examples/legacy/token-classification |
ml6team/distilbert-base-german-cased-toxic-comments | ml6team | 2022-06-15T22:10:04Z | 117 | 11 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"german",
"classification",
"de",
"dataset:germeval21",
"arxiv:1701.08118",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- de
tags:
- distilbert
- german
- classification
datasets:
- germeval21
widget:
- text: "Das ist ein guter Punkt, so hatte ich das noch nicht betrachtet."
example_title: "Agreement (non-toxic)"
- text: "Wow, was ein geiles Spiel. Glückwunsch."
example_title: "Football (non-toxic)"
- text: "Halt deine scheiß Fresse, du Arschloch"
example_title: "Silence (toxic)"
- text: "Verpiss dich, du dreckiger Hurensohn."
example_title: "Dismiss (toxic)"
---
# German Toxic Comment Classification
## Model Description
This model was created to detect toxic or potentially harmful comments.
For this model, we fine-tuned a German DistilBERT model [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on a combination of five German datasets containing toxicity, profanity, offensive language, or hate speech.
## Intended Uses & Limitations
This model can be used to detect toxicity in German comments.
However, the definition of toxicity is vague and the model might not be able to detect all instances of toxicity.
It will not be able to detect toxicity in languages other than German.
## How to Use
```python
from transformers import pipeline
model_hub_url = 'https://huggingface.co/ml6team/distilbert-base-german-cased-toxic-comments'
model_name = 'ml6team/distilbert-base-german-cased-toxic-comments'
toxicity_pipeline = pipeline('text-classification', model=model_name, tokenizer=model_name)
comment = "Ein harmloses Beispiel"
result = toxicity_pipeline(comment)[0]
print(f"Comment: {comment}\nLabel: {result['label']}, score: {result['score']}")
```
## Limitations and Bias
The model was trained on a combination of datasets that contain examples gathered from different social networks and internet communities. This only represents a narrow subset of possible instances of toxicity, and instances in other domains might not be detected reliably.
## Training Data
The training dataset combines the following five datasets:
* GermEval18 [[dataset](https://github.com/uds-lsv/GermEval-2018-Data)]
* Labels: abuse, profanity, toxicity
* GermEval21 [[dataset](https://github.com/germeval2021toxic/SharedTask/tree/main/Data%20Sets)]
* Labels: toxicity
* IWG Hatespeech dataset [[paper](https://arxiv.org/pdf/1701.08118.pdf), [dataset](https://github.com/UCSM-DUE/IWG_hatespeech_public)]
* Labels: hate speech
* Detecting Offensive Statements Towards Foreigners in Social Media (2017) by Breitschneider and Peters [[dataset](http://ub-web.de/research/)]
* Labels: hate
* HASOC: 2019 Hate Speech and Offensive Content [[dataset](https://hasocfire.github.io/hasoc/2019/index.html)]
* Labels: offensive, profanity, hate
The datasets contain different labels, ranging from profanity and hate speech to toxicity. In the combined dataset, these labels were subsumed under `toxic` and `non-toxic`; the combined dataset contains 23,515 examples in total.
Note that the datasets vary substantially in the number of examples.
## Training Procedure
The training and test sets were created using the predefined train/test splits where available, and otherwise an 80%/20% split. This resulted in 17,072 training examples and 6,443 test examples.
The model was trained for 2 epochs with the following arguments:
```python
training_args = TrainingArguments(
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=2,
evaluation_strategy="steps",
logging_strategy="steps",
logging_steps=100,
save_total_limit=5,
learning_rate=2e-5,
weight_decay=0.01,
metric_for_best_model='accuracy',
load_best_model_at_end=True
)
```
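A minimal sketch (not part of the original card) of how such arguments are typically wired into a `Trainer`; the model, tokenizer and dataset variables are assumptions standing in for the prepared splits described above:
```python
from transformers import Trainer

trainer = Trainer(
    model=model,                  # the fine-tuned DistilBERT classification model described above
    args=training_args,
    train_dataset=train_dataset,  # hypothetical tokenized training split
    eval_dataset=test_dataset,    # hypothetical tokenized test split
    tokenizer=tokenizer,
    # metric_for_best_model="accuracy" additionally requires a compute_metrics function
    # that returns {"accuracy": ...} on the evaluation set.
)
trainer.train()
```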
## Evaluation Results
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| -------- | -------- | -------- | ----------- |
| 78.50 | 50.34 | 39.22 | 70.27 |
|
tuni/xlm-roberta-large-xnli-finetuned-mnli | tuni | 2022-06-15T21:46:28Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-15T09:57:35Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: xlm-roberta-large-xnli-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8548888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-xnli-finetuned-mnli
This model is a fine-tuned version of [joeddav/xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2542
- Accuracy: 0.8549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7468 | 1.0 | 2250 | 0.8551 | 0.8348 |
| 0.567 | 2.0 | 4500 | 0.8935 | 0.8377 |
| 0.318 | 3.0 | 6750 | 0.9892 | 0.8492 |
| 0.1146 | 4.0 | 9000 | 1.2373 | 0.8446 |
| 0.0383 | 5.0 | 11250 | 1.2542 | 0.8549 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
emilys/BERTweet-CoNLL | emilys | 2022-06-15T21:19:05Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"NER",
"en",
"dataset:conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-14T22:41:31Z | ---
language:
- en
tags:
- NER
datasets:
- conll2003
---
bertweet-base (https://huggingface.co/vinai/bertweet-base) finetuned on CoNLL (2003) English, following https://github.com/huggingface/transformers/tree/main/examples/legacy/token-classification |
jianyang/dqn-SpaceInvadersNoFrameskip-v4 | jianyang | 2022-06-15T20:31:27Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T20:30:43Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 699.00 +/- 184.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jianyang -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jianyang
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ouiame/bert2gpt2Summy | ouiame | 2022-06-15T19:31:08Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"fr",
"dataset:ouiame/autotrain-data-trainproject",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-15T13:08:46Z | ---
tags: autotrain
language: fr
widget:
- text: "I love AutoTrain 🤗"
datasets:
- ouiame/autotrain-data-trainproject
co2_eq_emissions: 894.9753853627794
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 985232782
- CO2 Emissions (in grams): 894.9753853627794
## Validation Metrics
- Loss: 1.9692628383636475
- Rouge1: 19.3642
- Rouge2: 7.3644
- RougeL: 16.148
- RougeLsum: 16.4988
- Gen Len: 18.9975
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ouiame/autotrain-trainproject-985232782
``` |
castorini/afriberta_small | castorini | 2022-06-15T18:20:16Z | 158 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language:
- om
- am
- rw
- rn
- ha
- ig
- pcm
- so
- sw
- ti
- yo
- multilingual
---
# afriberta_small
## Model description
AfriBERTa small is a pretrained multilingual language model with around 97 million parameters.
The model has 4 layers, 6 attention heads, 768 hidden units and a feed-forward size of 3072.
The model was pretrained on 11 African languages, namely Afaan Oromoo (also called Oromo), Amharic, Gahuza (a mixed language containing Kinyarwanda and Kirundi), Hausa, Igbo, Nigerian Pidgin, Somali, Swahili, Tigrinya and Yorùbá.
The model has been shown to obtain competitive downstream performance on text classification and Named Entity Recognition on several African languages, including those it was not pretrained on.
## Intended uses & limitations
#### How to use
You can use this model with Transformers for any downstream task.
For example, assuming we want to finetune this model on a token classification task, we do the following:
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> model = AutoModelForTokenClassification.from_pretrained("castorini/afriberta_small")
>>> tokenizer = AutoTokenizer.from_pretrained("castorini/afriberta_small")
# we have to manually set the model max length because it is an imported trained sentencepiece model, which huggingface does not properly support right now
>>> tokenizer.model_max_length = 512
```
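As an illustrative continuation (the input sentence is hypothetical and not from the original card), inference then works like any other token-classification model; note that the classification head above is freshly initialized, so its predictions are only meaningful after finetuning:
```python
>>> import torch
>>> inputs = tokenizer("Nairobi ni mji mkuu wa Kenya.", truncation=True, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predictions = logits.argmax(dim=-1)[0].tolist()  # one label id per token
```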
#### Limitations and bias
- This model is possibly limited by its training dataset, which was mostly obtained from news articles from a specific span of time. Thus, it may not generalize well.
- This model is trained on very little data (less than 1 GB), hence it may not have seen enough data to learn very complex linguistic relations.
## Training data
The model was trained on an aggregation of datasets from the BBC news website and Common Crawl.
## Training procedure
For information on training procedures, please refer to the AfriBERTa [paper]() or [repository](https://github.com/keleog/afriberta)
### BibTeX entry and citation info
```
@inproceedings{ogueji-etal-2021-small,
title = "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages",
author = "Ogueji, Kelechi and
Zhu, Yuxin and
Lin, Jimmy",
booktitle = "Proceedings of the 1st Workshop on Multilingual Representation Learning",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.mrl-1.11",
pages = "116--126",
}
```
|
Vkt/model-960hfacebook-2022.06.08 | Vkt | 2022-06-15T18:17:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-08T16:16:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: model-960hfacebook-2022.06.08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-960hfacebook-2022.06.08
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2907
- Wer: 0.1804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.7634 | 0.21 | 300 | 2.9743 | 0.9998 |
| 1.6536 | 0.43 | 600 | 0.8605 | 0.7529 |
| 0.9823 | 0.64 | 900 | 0.6600 | 0.6286 |
| 0.8708 | 0.86 | 1200 | 0.5780 | 0.5736 |
| 0.7878 | 1.07 | 1500 | 0.5386 | 0.5326 |
| 0.7033 | 1.29 | 1800 | 0.4986 | 0.4992 |
| 0.681 | 1.5 | 2100 | 0.4575 | 0.4778 |
| 0.6537 | 1.72 | 2400 | 0.4591 | 0.4482 |
| 0.6263 | 1.93 | 2700 | 0.4317 | 0.4353 |
| 0.5811 | 2.14 | 3000 | 0.4149 | 0.4159 |
| 0.5565 | 2.36 | 3300 | 0.4170 | 0.3956 |
| 0.5501 | 2.57 | 3600 | 0.4007 | 0.3929 |
| 0.5444 | 2.79 | 3900 | 0.3930 | 0.3851 |
| 0.5177 | 3.0 | 4200 | 0.4006 | 0.3630 |
| 0.4682 | 3.22 | 4500 | 0.3707 | 0.3713 |
| 0.4805 | 3.43 | 4800 | 0.3564 | 0.3583 |
| 0.4715 | 3.65 | 5100 | 0.3596 | 0.3434 |
| 0.4482 | 3.86 | 5400 | 0.3555 | 0.3394 |
| 0.4407 | 4.07 | 5700 | 0.3680 | 0.3312 |
| 0.4134 | 4.29 | 6000 | 0.3534 | 0.3328 |
| 0.4165 | 4.5 | 6300 | 0.3294 | 0.3259 |
| 0.4196 | 4.72 | 6600 | 0.3353 | 0.3214 |
| 0.4117 | 4.93 | 6900 | 0.3266 | 0.3211 |
| 0.3847 | 5.15 | 7200 | 0.3365 | 0.3156 |
| 0.3687 | 5.36 | 7500 | 0.3233 | 0.3014 |
| 0.376 | 5.58 | 7800 | 0.3345 | 0.2979 |
| 0.3732 | 5.79 | 8100 | 0.3105 | 0.2882 |
| 0.3705 | 6.0 | 8400 | 0.3252 | 0.2935 |
| 0.3311 | 6.22 | 8700 | 0.3266 | 0.2911 |
| 0.3386 | 6.43 | 9000 | 0.2975 | 0.2765 |
| 0.337 | 6.65 | 9300 | 0.3070 | 0.2826 |
| 0.3458 | 6.86 | 9600 | 0.3090 | 0.2766 |
| 0.3218 | 7.08 | 9900 | 0.3117 | 0.2748 |
| 0.3041 | 7.29 | 10200 | 0.2989 | 0.2651 |
| 0.3031 | 7.51 | 10500 | 0.3210 | 0.2672 |
| 0.3037 | 7.72 | 10800 | 0.3040 | 0.2667 |
| 0.3126 | 7.93 | 11100 | 0.2867 | 0.2613 |
| 0.3005 | 8.15 | 11400 | 0.3075 | 0.2610 |
| 0.2802 | 8.36 | 11700 | 0.3129 | 0.2608 |
| 0.2785 | 8.58 | 12000 | 0.3002 | 0.2579 |
| 0.2788 | 8.79 | 12300 | 0.3063 | 0.2476 |
| 0.286 | 9.01 | 12600 | 0.2971 | 0.2495 |
| 0.2534 | 9.22 | 12900 | 0.2766 | 0.2452 |
| 0.2542 | 9.44 | 13200 | 0.2893 | 0.2405 |
| 0.2576 | 9.65 | 13500 | 0.3038 | 0.2518 |
| 0.2552 | 9.86 | 13800 | 0.2851 | 0.2429 |
| 0.2487 | 10.08 | 14100 | 0.2858 | 0.2356 |
| 0.2441 | 10.29 | 14400 | 0.2999 | 0.2364 |
| 0.2345 | 10.51 | 14700 | 0.2907 | 0.2373 |
| 0.2352 | 10.72 | 15000 | 0.2885 | 0.2402 |
| 0.2464 | 10.94 | 15300 | 0.2896 | 0.2339 |
| 0.2219 | 11.15 | 15600 | 0.2999 | 0.2351 |
| 0.2257 | 11.37 | 15900 | 0.2930 | 0.2326 |
| 0.2184 | 11.58 | 16200 | 0.2980 | 0.2353 |
| 0.2182 | 11.79 | 16500 | 0.2832 | 0.2296 |
| 0.2224 | 12.01 | 16800 | 0.2797 | 0.2285 |
| 0.1991 | 12.22 | 17100 | 0.2810 | 0.2296 |
| 0.1993 | 12.44 | 17400 | 0.2949 | 0.2253 |
| 0.2042 | 12.65 | 17700 | 0.2864 | 0.2207 |
| 0.2083 | 12.87 | 18000 | 0.2860 | 0.2278 |
| 0.1998 | 13.08 | 18300 | 0.2872 | 0.2232 |
| 0.1919 | 13.3 | 18600 | 0.2894 | 0.2247 |
| 0.1925 | 13.51 | 18900 | 0.3007 | 0.2234 |
| 0.1966 | 13.72 | 19200 | 0.2831 | 0.2176 |
| 0.1942 | 13.94 | 19500 | 0.2811 | 0.2161 |
| 0.1778 | 14.15 | 19800 | 0.2901 | 0.2196 |
| 0.1755 | 14.37 | 20100 | 0.2864 | 0.2188 |
| 0.1795 | 14.58 | 20400 | 0.2927 | 0.2170 |
| 0.1817 | 14.8 | 20700 | 0.2846 | 0.2156 |
| 0.1754 | 15.01 | 21000 | 0.3036 | 0.2137 |
| 0.1674 | 15.23 | 21300 | 0.2876 | 0.2156 |
| 0.171 | 15.44 | 21600 | 0.2812 | 0.2106 |
| 0.1603 | 15.65 | 21900 | 0.2692 | 0.2093 |
| 0.1663 | 15.87 | 22200 | 0.2745 | 0.2094 |
| 0.1608 | 16.08 | 22500 | 0.2807 | 0.2043 |
| 0.1555 | 16.3 | 22800 | 0.2872 | 0.2036 |
| 0.1546 | 16.51 | 23100 | 0.2837 | 0.2049 |
| 0.1515 | 16.73 | 23400 | 0.2746 | 0.2031 |
| 0.1571 | 16.94 | 23700 | 0.2767 | 0.2047 |
| 0.1498 | 17.16 | 24000 | 0.2837 | 0.2050 |
| 0.143 | 17.37 | 24300 | 0.2745 | 0.2038 |
| 0.1471 | 17.58 | 24600 | 0.2787 | 0.2004 |
| 0.1442 | 17.8 | 24900 | 0.2779 | 0.2005 |
| 0.1481 | 18.01 | 25200 | 0.2906 | 0.2021 |
| 0.1318 | 18.23 | 25500 | 0.2936 | 0.1991 |
| 0.1396 | 18.44 | 25800 | 0.2913 | 0.1984 |
| 0.144 | 18.66 | 26100 | 0.2806 | 0.1953 |
| 0.1341 | 18.87 | 26400 | 0.2896 | 0.1972 |
| 0.1375 | 19.09 | 26700 | 0.2937 | 0.2002 |
| 0.1286 | 19.3 | 27000 | 0.2929 | 0.1954 |
| 0.1242 | 19.51 | 27300 | 0.2968 | 0.1962 |
| 0.1305 | 19.73 | 27600 | 0.2879 | 0.1944 |
| 0.1287 | 19.94 | 27900 | 0.2850 | 0.1937 |
| 0.1286 | 20.16 | 28200 | 0.2910 | 0.1961 |
| 0.121 | 20.37 | 28500 | 0.2908 | 0.1912 |
| 0.1264 | 20.59 | 28800 | 0.2853 | 0.1904 |
| 0.1238 | 20.8 | 29100 | 0.2913 | 0.1926 |
| 0.117 | 21.02 | 29400 | 0.2907 | 0.1922 |
| 0.1154 | 21.23 | 29700 | 0.2902 | 0.1888 |
| 0.1142 | 21.44 | 30000 | 0.2854 | 0.1907 |
| 0.1168 | 21.66 | 30300 | 0.2918 | 0.1873 |
| 0.1168 | 21.87 | 30600 | 0.2897 | 0.1873 |
| 0.1105 | 22.09 | 30900 | 0.2951 | 0.1856 |
| 0.1134 | 22.3 | 31200 | 0.2842 | 0.1847 |
| 0.1111 | 22.52 | 31500 | 0.2884 | 0.1829 |
| 0.1088 | 22.73 | 31800 | 0.2991 | 0.1840 |
| 0.1139 | 22.94 | 32100 | 0.2876 | 0.1839 |
| 0.1078 | 23.16 | 32400 | 0.2899 | 0.1830 |
| 0.1087 | 23.37 | 32700 | 0.2927 | 0.1803 |
| 0.1076 | 23.59 | 33000 | 0.2924 | 0.1801 |
| 0.11 | 23.8 | 33300 | 0.2877 | 0.1804 |
| 0.1067 | 24.02 | 33600 | 0.2918 | 0.1799 |
| 0.1104 | 24.23 | 33900 | 0.2908 | 0.1809 |
| 0.1023 | 24.45 | 34200 | 0.2939 | 0.1807 |
| 0.0993 | 24.66 | 34500 | 0.2925 | 0.1802 |
| 0.1053 | 24.87 | 34800 | 0.2907 | 0.1804 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1+cu111
- Datasets 2.2.1
- Tokenizers 0.12.1
|
SimulSt/xlm-roberta-base-finetuned-panx-de | SimulSt | 2022-06-15T16:59:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-15T16:20:48Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Marscen/roberta-base-squad2-finetuned-squad2 | Marscen | 2022-06-15T16:15:26Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-14T14:50:48Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-squad2-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-squad2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6979 | 1.0 | 16478 | 0.8815 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.8.1+cu111
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ksabeh/albert-base-v2-attribute-correction-mlm | ksabeh | 2022-06-15T15:49:41Z | 5 | 0 | transformers | [
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-15T07:46:56Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ksabeh/albert-base-v2-mlm-electronics-attribute-correction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/albert-base-v2-mlm-electronics-attribute-correction
This model is a fine-tuned version of [ksabeh/albert-base-v2-mlm-electronics](https://huggingface.co/ksabeh/albert-base-v2-mlm-electronics) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0541
- Validation Loss: 0.0570
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36852, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1364 | 0.0743 | 0 |
| 0.0541 | 0.0570 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ncfrey/ChemGPT-1.2B | ncfrey | 2022-06-15T15:44:24Z | 116 | 13 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-11T20:16:48Z | ---
tags:
- chemistry
---
# ChemGPT 1.2B
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
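For instance, here is a minimal generation sketch with the 🤗 Transformers auto classes; the SELFIES prompt and sampling settings below are illustrative assumptions, not values from the paper:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# load the SELFIES tokenizer and the GPT-Neo-based model from the Hub
tokenizer = AutoTokenizer.from_pretrained("ncfrey/ChemGPT-1.2B")
model = AutoModelForCausalLM.from_pretrained("ncfrey/ChemGPT-1.2B")

# seed with a short SELFIES fragment and sample a continuation
inputs = tokenizer("[C][C][O]", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0]))
```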
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
``` |
ncfrey/ChemGPT-19M | ncfrey | 2022-06-15T15:19:57Z | 384 | 5 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-11T20:02:27Z | ---
tags:
- chemistry
---
# ChemGPT 19M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
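As an illustration of this preprocessing step, the conversion looks roughly like this with the `selfies` package (the example molecule is an arbitrary choice):
```python
import selfies as sf

# round-trip a SMILES string through SELFIES, as done for the training corpus
smiles = "C1=CC=CC=C1"              # benzene in Kekulé form
selfies_str = sf.encoder(smiles)     # SMILES -> SELFIES
recovered = sf.decoder(selfies_str)  # SELFIES -> SMILES
print(selfies_str, recovered)
```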
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
``` |
ncfrey/ChemGPT-4.7M | ncfrey | 2022-06-15T15:17:11Z | 391 | 19 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"chemistry",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-05-11T19:54:55Z | ---
tags:
- chemistry
---
# ChemGPT 4.7M
ChemGPT is based on the GPT-Neo model and was introduced in the paper [Neural Scaling of Deep Chemical Models](https://chemrxiv.org/engage/chemrxiv/article-details/627bddd544bdd532395fb4b5).
## Model description
ChemGPT is a transformers model for generative molecular modeling, which was pretrained on the PubChem10M dataset.
## Intended uses & limitations
### How to use
You can use this model directly from the 🤗/transformers library.
### Limitations and bias
This model was trained on a subset of molecules from PubChem. You can use this model to generate molecules, but it is mostly intended to be used for investigations of the effects of pre-training and fine-tuning on downstream datasets.
## Training data
PubChem10M, a dataset of SMILES strings from PubChem, available via [DeepChem](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip).
## Training procedure
### Preprocessing
SMILES strings were converted to SELFIES using version 1.0.4 of the SELFIES library.
### Pretraining
See code in the [LitMatter repository](https://github.com/ncfrey/litmatter/blob/main/lit_models/lit_chemgpt.py).
### BibTeX entry and citation info
```
@article{frey_soklaski_axelrod_samsi_gomez-bombarelli_coley_gadepally_2022,
place={Cambridge}, title={Neural Scaling of Deep Chemical Models},
DOI={10.26434/chemrxiv-2022-3s512}, journal={ChemRxiv}, publisher={Cambridge Open Engage},
author={Frey, Nathan and Soklaski, Ryan and Axelrod, Simon and Samsi, Siddharth and Gomez-Bombarelli, Rafael and Coley, Connor and Gadepally, Vijay},
year={2022}} This content is a preprint and has not been peer-reviewed.
```
```
Frey, Nathan, Ryan Soklaski, Simon Axelrod, Siddharth Samsi, Rafael Gomez-Bombarelli, Connor Coley, and Vijay Gadepally.
"Neural Scaling of Deep Chemical Models." ChemRxiv (2022). Print. This content is a preprint and has not been peer-reviewed.
```
|
jkhan447/sarcasm-detection-Bert-base-uncased-CR-POS | jkhan447 | 2022-06-15T12:59:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-15T04:05:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-Bert-base-uncased-CR-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-Bert-base-uncased-CR-POS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1816
- Accuracy: 0.5783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
tuni/distilbert-base-uncased-finetuned-mnli | tuni | 2022-06-15T12:57:52Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T21:50:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8204788588894549
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Accuracy: 0.8205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5188 | 1.0 | 24544 | 0.4979 | 0.8047 |
| 0.4153 | 2.0 | 49088 | 0.4845 | 0.8147 |
| 0.3008 | 3.0 | 73632 | 0.5631 | 0.8204 |
| 0.2226 | 4.0 | 98176 | 0.6574 | 0.8205 |
| 0.189 | 5.0 | 122720 | 0.8209 | 0.8194 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
mikeluck/wav2vec2-base-timit-demo-google-colab | mikeluck | 2022-06-15T12:43:38Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-06-15T10:44:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5351
- Wer: 0.3384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.6311 | 1.0 | 500 | 2.6700 | 1.0 |
| 1.0104 | 2.01 | 1000 | 0.5289 | 0.5277 |
| 0.4483 | 3.01 | 1500 | 0.4576 | 0.4623 |
| 0.3089 | 4.02 | 2000 | 0.4483 | 0.4255 |
| 0.2278 | 5.02 | 2500 | 0.4463 | 0.4022 |
| 0.1886 | 6.02 | 3000 | 0.4653 | 0.3938 |
| 0.1578 | 7.03 | 3500 | 0.4624 | 0.3855 |
| 0.1429 | 8.03 | 4000 | 0.4420 | 0.3854 |
| 0.1244 | 9.04 | 4500 | 0.4980 | 0.3787 |
| 0.1126 | 10.04 | 5000 | 0.4311 | 0.3785 |
| 0.1082 | 11.04 | 5500 | 0.5114 | 0.3782 |
| 0.0888 | 12.05 | 6000 | 0.5392 | 0.3725 |
| 0.0835 | 13.05 | 6500 | 0.6011 | 0.3941 |
| 0.074 | 14.06 | 7000 | 0.5030 | 0.3652 |
| 0.0667 | 15.06 | 7500 | 0.5041 | 0.3583 |
| 0.0595 | 16.06 | 8000 | 0.5125 | 0.3605 |
| 0.0578 | 17.07 | 8500 | 0.5206 | 0.3592 |
| 0.0573 | 18.07 | 9000 | 0.5208 | 0.3643 |
| 0.0469 | 19.08 | 9500 | 0.4670 | 0.3537 |
| 0.0442 | 20.08 | 10000 | 0.5388 | 0.3497 |
| 0.0417 | 21.08 | 10500 | 0.5213 | 0.3581 |
| 0.0361 | 22.09 | 11000 | 0.5096 | 0.3465 |
| 0.0338 | 23.09 | 11500 | 0.5178 | 0.3459 |
| 0.0333 | 24.1 | 12000 | 0.5240 | 0.3490 |
| 0.0256 | 25.1 | 12500 | 0.5438 | 0.3464 |
| 0.0248 | 26.1 | 13000 | 0.5182 | 0.3412 |
| 0.0231 | 27.11 | 13500 | 0.5628 | 0.3423 |
| 0.0228 | 28.11 | 14000 | 0.5416 | 0.3419 |
| 0.0223 | 29.12 | 14500 | 0.5351 | 0.3384 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
magelang1337/Backlinks | magelang1337 | 2022-06-15T11:56:54Z | 0 | 0 | null | [
"region:us"
] | null | 2022-06-15T11:56:30Z | https://www.beesource.com/members/magelang1337.142760/#about
https://leasedadspace.com/frame.php?bfm_page=members/magelang1337&aid=magelang1337
https://www.jqwidgets.com/community/users/magelang1337/
https://metalstorm.net/users/magelang1337/profile
https://myanimelist.net/profile/mnhblog
https://forum.codeigniter.com/member.php?action=profile&uid=50438
https://purothemes.com/support/users/magelang1337/
https://talkmarkets.com/member/nutrisi25/
https://www.ngemu.com/members/magelang1337.723290/
https://lunarxtest.com/horizondrifters/community/profile/magelang1337/
https://lifedonefree.com/community/profile/magelang1337/
https://multijoueur.online/forum/profile/magelang1337/
https://www.prevailingtruth.net/community/profile/magelang1337/
https://confidentkidsborntosparkle.com/community/profile/magelng1337/
https://cyborg-guide.ru/forum/profile/magelang1337/
https://nvridersforum.com/profile/magelang1337/
http://cubeengine.com/forum.php?action=display_thread&thread_id=2745
https://forums.opensuse.org/showthread.php/564850-Full-system-crash-when-playing-games?p=3133991#post3133991
https://www.askalondoner.co.uk/community/profile/magelang1337/ |
AnyaSchen/rugpt3_esenin | AnyaSchen | 2022-06-15T11:26:44Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-15T10:28:01Z | This model is a fine-tuned ruGPT-3 medium model, tuned to the style of Sergei Yesenin's poetry in Russian. You can give it a word, a phrase, or just an empty line as input, and it will generate a poem in Yesenin's style.
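A minimal generation sketch with the 🤗 Transformers pipeline (the Russian prompt and sampling settings are illustrative assumptions):
```python
from transformers import pipeline

# load the fine-tuned poetry model from the Hub
generator = pipeline("text-generation", model="AnyaSchen/rugpt3_esenin")

# seed with a word or phrase (here "Луна", i.e. "moon") and sample a poem
print(generator("Луна", max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```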
 |
ChrisUPM/BioBERT_Re_trained | ChrisUPM | 2022-06-15T11:10:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-04-18T12:54:24Z | PyTorch model trained on the GAD dataset for relation classification, using BioBERT weights. |
Corianas/dqn-BeamRiderNoFrameskip-v4 | Corianas | 2022-06-15T10:41:50Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T08:55:40Z | ---
library_name: stable-baselines3
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 3983.00 +/- 1512.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
---
# **DQN** Agent playing **BeamRiderNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **BeamRiderNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env BeamRiderNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env BeamRiderNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
fabiochiu/dqn-SpaceInvadersNoFrameskip-v4 | fabiochiu | 2022-06-15T10:32:49Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T10:32:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 631.50 +/- 84.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fabiochiu -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fabiochiu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
rajendra-ml/Chandrayaan | rajendra-ml | 2022-06-15T10:16:42Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T10:16:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 176.75 +/- 18.07
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="rajendra-ml/Chandrayaan", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TinySuitStarfish/q-FrozenLake-v1-4x4-Slippery | TinySuitStarfish | 2022-06-15T10:09:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T10:09:31Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.72 +/- 0.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="TinySuitStarfish/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
FritzOS/TEdetection_distiBERT_NER_final_8e | FritzOS | 2022-06-15T09:37:10Z | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-15T09:36:53Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_NER_final_8e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_final_8e
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_final_8e](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_final_8e) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0032
- Validation Loss: 0.0037
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 220743, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0032 | 0.0037 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.3.0
- Tokenizers 0.12.1
|
pere/eu-jav-categorisation | pere | 2022-06-15T08:20:30Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-29T20:40:09Z | Private sample code for running categorisation on the mT5X |
facebook/wav2vec2-conformer-rope-large-100h-ft | facebook | 2022-06-15T08:16:47Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-18T09:48:47Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
---
# Wav2Vec2-Conformer-Large-100h with Rotary Position Embeddings
Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech and fine-tuned on **100 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rope-large-100h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
``` |
facebook/wav2vec2-conformer-rel-pos-large-960h-ft | facebook | 2022-06-15T08:12:40Z | 659 | 5 | transformers | [
"transformers",
"pytorch",
"wav2vec2-conformer",
"automatic-speech-recognition",
"speech",
"audio",
"hf-asr-leaderboard",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-04-18T09:17:37Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec2-conformer-rel-pos-large-960h-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.85
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.83
---
# Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings
Wav2Vec2-Conformer with relative position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-conformer-rel-pos-large-960h-ft** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ConformerForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ConformerForCTC.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rel-pos-large-960h-ft")
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
attention_mask = inputs.attention_mask.to("cuda")
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.85 | 3.82 | |
facebook/wav2vec2-conformer-rope-large | facebook | 2022-06-15T08:12:09Z | 33 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2-conformer",
"pretraining",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2010.05171",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-04-18T09:26:53Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Conformer-Large with Rotary Position Embeddings
Wav2Vec2 Conformer with rotary position embeddings, pretrained on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
**Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
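The pretrained encoder can also be used directly to extract speech representations. Below is a minimal sketch; it assumes the repository ships a feature-extractor config:
```python
from transformers import AutoFeatureExtractor, Wav2Vec2ConformerModel
from datasets import load_dataset
import torch

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-conformer-rope-large")
model = Wav2Vec2ConformerModel.from_pretrained("facebook/wav2vec2-conformer-rope-large")

# load a dummy sample and extract frame-level hidden states
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
```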
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model. |
FritzOS/TEdetection_distiBERT_mLM_final_8e | FritzOS | 2022-06-15T07:55:31Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-15T07:55:17Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: TEdetection_distiBERT_mLM_final_8e
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_mLM_final_8e
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.3.0
- Tokenizers 0.12.1
|
lewtun/dog-vs-chicken | lewtun | 2022-06-15T07:09:02Z | 52 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-15T07:08:51Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: dog-vs-chicken
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# dog-vs-chicken
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
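For inference, a minimal sketch with the image-classification pipeline (not part of the auto-generated card; it reuses one of the example images below):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="lewtun/dog-vs-chicken")

# classify one of the example images shipped with this repository
url = "https://huggingface.co/lewtun/dog-vs-chicken/resolve/main/images/crispy_fried_chicken.jpg"
print(classifier(url))
```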
## Example Images
#### crispy fried chicken

#### poodle
 |
eslamxm/xlmroberta-finetuned-fa | eslamxm | 2022-06-15T06:53:15Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarization",
"fa",
"xlmroberta",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:pn_summary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-14T22:08:51Z | ---
tags:
- summarization
- fa
- xlmroberta
- Abstractive Summarization
- generated_from_trainer
datasets:
- pn_summary
model-index:
- name: xlmroberta-finetuned-fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta-finetuned-fa
This model is a fine-tuned version of [](https://huggingface.co/) on the pn_summary dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2286
- Rouge-1: 4.99
- Rouge-2: 0.0
- Rouge-l: 4.99
- Gen Len: 20.0
- Bertscore: 51.89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
seomh/distilbert-base-uncased-finetuned-squad | seomh | 2022-06-15T06:49:56Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-11T14:04:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0083
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2258 | 1.0 | 5533 | 0.0560 |
| 0.952 | 2.0 | 11066 | 0.0096 |
| 0.7492 | 3.0 | 16599 | 0.0083 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
eslamxm/mt5-base-finetuned-Spanish | eslamxm | 2022-06-15T05:13:08Z | 94 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"es",
"spanish",
"abstractive summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-14T18:45:17Z | ---
license: apache-2.0
tags:
- summarization
- mt5
- es
- spanish
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: mt5-base-finetuned-Spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-Spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1727
- Rouge-1: 28.11
- Rouge-2: 12.09
- Rouge-l: 24.62
- Gen Len: 18.73
- Bertscore: 72.25
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
huggingtweets/danny_macaskill-martynashton | huggingtweets | 2022-06-15T04:59:30Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-15T04:58:54Z | ---
language: en
thumbnail: http://www.huggingtweets.com/danny_macaskill-martynashton/1655269165002/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/770573812991754240/gyUr23bS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/616596420230021120/w-kK8IT6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Danny MacAskill & Martyn Ashton</div>
<div style="text-align: center; font-size: 14px;">@danny_macaskill-martynashton</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Danny MacAskill & Martyn Ashton.
| Data | Danny MacAskill | Martyn Ashton |
| --- | --- | --- |
| Tweets downloaded | 2971 | 3179 |
| Retweets | 505 | 810 |
| Short tweets | 79 | 136 |
| Tweets kept | 2387 | 2233 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31ege8zb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @danny_macaskill-martynashton's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/g4d86tk2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/g4d86tk2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/danny_macaskill-martynashton')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
steven123/Teeth_A | steven123 | 2022-06-15T02:42:35Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-15T02:42:24Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Teeth_A
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.4545454680919647
---
# Teeth_A
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth
 |
DLochmelis33/22s-dl-sentiment-1 | DLochmelis33 | 2022-06-15T01:07:08Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-15T01:01:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: 22s-dl-sentiment-1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9542333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 22s-dl-sentiment-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2574
- Accuracy: 0.9542
## Model description
More information needed
## Intended uses & limitations
More information needed
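A minimal usage sketch with the `transformers` text-classification pipeline; the example review is arbitrary, and the returned labels are the generic `LABEL_0`-`LABEL_4` star buckets unless the saved config maps them to names:
```python
from transformers import pipeline

# DistilBERT fine-tuned on yelp_review_full (five star-rating classes)
classifier = pipeline("text-classification", model="DLochmelis33/22s-dl-sentiment-1")

print(classifier("The food was great and the service was even better!"))
```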
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
tanbwilson/q-Taxi-v3 | tanbwilson | 2022-06-15T01:04:48Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-15T01:04:42Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="tanbwilson/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
enoriega/rule_learning_margin_1mm_spanpred | enoriega | 2022-06-15T00:55:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
] | null | 2022-06-11T02:59:23Z | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Margin Accuracy: 0.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5448 | 0.16 | 20 | 0.5229 | 0.7717 |
| 0.4571 | 0.32 | 40 | 0.4292 | 0.8109 |
| 0.4296 | 0.48 | 60 | 0.4009 | 0.8193 |
| 0.4028 | 0.64 | 80 | 0.3855 | 0.8296 |
| 0.3878 | 0.8 | 100 | 0.3757 | 0.8334 |
| 0.3831 | 0.96 | 120 | 0.3643 | 0.8367 |
| 0.3591 | 1.12 | 140 | 0.3582 | 0.8393 |
| 0.3598 | 1.28 | 160 | 0.3533 | 0.8401 |
| 0.3635 | 1.44 | 180 | 0.3442 | 0.8427 |
| 0.3478 | 1.6 | 200 | 0.3406 | 0.8472 |
| 0.342 | 1.76 | 220 | 0.3352 | 0.8479 |
| 0.3327 | 1.92 | 240 | 0.3352 | 0.8486 |
| 0.3487 | 2.08 | 260 | 0.3293 | 0.8487 |
| 0.3387 | 2.24 | 280 | 0.3298 | 0.8496 |
| 0.3457 | 2.4 | 300 | 0.3279 | 0.8505 |
| 0.3483 | 2.56 | 320 | 0.3286 | 0.8510 |
| 0.3421 | 2.72 | 340 | 0.3245 | 0.8517 |
| 0.3332 | 2.88 | 360 | 0.3252 | 0.8517 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
steven123/Teeth_B | steven123 | 2022-06-15T00:31:50Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-15T00:31:36Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Teeth_B
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6800000071525574
---
# Teeth_B
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth
 |
Tstarshak/ppo-LunarLander-v2 | Tstarshak | 2022-06-15T00:17:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-14T23:30:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 184.36 +/- 74.26
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
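As a stop-gap for the TODO above, a minimal sketch of the usual loading pattern; the checkpoint filename is an assumption, so check the repository's file list before running:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub(repo_id="Tstarshak/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Run one greedy action in the environment
env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```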
|
steven123/teeth_test | steven123 | 2022-06-14T23:57:13Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-14T23:46:08Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: teeth_test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5555555820465088
---
# teeth_test
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Good Teeth

#### Missing Teeth

#### Rotten Teeth
 |
tanbwilson/test2ppo-LunarLander-v2 | tanbwilson | 2022-06-14T23:53:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-14T23:53:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 196.23 +/- 70.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
huggingtweets/rangersfc | huggingtweets | 2022-06-14T20:58:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-14T20:58:15Z | ---
language: en
thumbnail: http://www.huggingtweets.com/rangersfc/1655240322192/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513529336107839491/OQuphidQ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rangers Football Club</div>
<div style="text-align: center; font-size: 14px;">@rangersfc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rangers Football Club.
| Data | Rangers Football Club |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 315 |
| Short tweets | 338 |
| Tweets kept | 2597 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3150wqc2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rangersfc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bzvo1hp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bzvo1hp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rangersfc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ahmeddbahaa/AraBART-finetuned-ar | ahmeddbahaa | 2022-06-14T20:41:43Z | 24 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-04-04T14:58:44Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: AraBART-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraBART-finetuned-ar
This model is a fine-tuned version of [moussaKam/AraBART](https://huggingface.co/moussaKam/AraBART) on the Arabic xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7449
- Rouge-1: 31.08
- Rouge-2: 14.68
- Rouge-l: 27.36
- Gen Len: 19.64
- Bertscore: 73.86
## Model description
More information needed
## Intended uses & limitations
More information needed
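A minimal summarization sketch with the `transformers` pipeline; the input string is a placeholder for an Arabic news article and the generation arguments are illustrative:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/AraBART-finetuned-ar")

article = "نص المقال الإخباري هنا"  # placeholder: paste an Arabic news article here
print(summarizer(article, max_length=64, min_length=10, do_sample=False))
```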
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.4318 | 1.0 | 2345 | 3.7996 | 28.93 | 13.2 | 25.56 | 19.51 | 73.17 |
| 4.0338 | 2.0 | 4690 | 3.7483 | 30.29 | 14.24 | 26.73 | 19.5 | 73.59 |
| 3.8586 | 3.0 | 7035 | 3.7281 | 30.44 | 14.44 | 26.92 | 19.75 | 73.58 |
| 3.7289 | 4.0 | 9380 | 3.7204 | 30.55 | 14.49 | 26.88 | 19.66 | 73.73 |
| 3.6245 | 5.0 | 11725 | 3.7199 | 30.73 | 14.63 | 27.11 | 19.69 | 73.68 |
| 3.5392 | 6.0 | 14070 | 3.7221 | 30.85 | 14.65 | 27.21 | 19.7 | 73.77 |
| 3.4694 | 7.0 | 16415 | 3.7286 | 31.08 | 14.8 | 27.41 | 19.62 | 73.84 |
| 3.4126 | 8.0 | 18760 | 3.7384 | 31.06 | 14.77 | 27.41 | 19.64 | 73.82 |
| 3.3718 | 9.0 | 21105 | 3.7398 | 31.18 | 14.89 | 27.49 | 19.67 | 73.87 |
| 3.3428 | 10.0 | 23450 | 3.7449 | 31.19 | 14.88 | 27.44 | 19.68 | 73.87 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
tanbwilson/ppo-LunarLander-v2 | tanbwilson | 2022-06-14T20:31:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-14T20:31:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.14 +/- 22.06
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
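As a stop-gap for the TODO above, a minimal evaluation sketch; the checkpoint filename is an assumption, so check the repository's file list before running:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download and load the checkpoint (filename assumed)
checkpoint = load_from_hub(repo_id="tanbwilson/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Average reward over 10 evaluation episodes
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```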
|
Alireza1044/mobilebert_QNLI | Alireza1044 | 2022-06-14T19:54:02Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T15:54:12Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9068277503203368
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3731
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
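A minimal sketch for scoring a question-sentence pair; the example pair is arbitrary, and the label names may be the generic `LABEL_0`/`LABEL_1` rather than `entailment`/`not_entailment`, depending on the saved config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Alireza1044/mobilebert_QNLI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron tower in Paris, France."

# Encode the pair and pick the highest-scoring class
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```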
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
cindy203cc/finetuning-sentiment-model-3000-samples | cindy203cc | 2022-06-14T19:16:33Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T18:55:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3187
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
eslamxm/mt5-base-arabic | eslamxm | 2022-06-14T18:08:07Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"arabic",
"ar",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-05-09T06:32:04Z | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-arabic
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Arabic subset of the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2742
- Rouge-1: 22.86
- Rouge-2: 10.31
- Rouge-l: 20.85
- Gen Len: 19.0
- Bertscore: 71.52
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.2331 | 1.0 | 1172 | 3.5051 | 18.54 | 6.63 | 16.77 | 19.0 | 70.28 |
| 3.7075 | 2.0 | 2344 | 3.3737 | 19.99 | 7.94 | 18.19 | 19.0 | 70.79 |
| 3.5132 | 3.0 | 3516 | 3.3171 | 20.76 | 8.57 | 18.96 | 19.0 | 70.95 |
| 3.3859 | 4.0 | 4688 | 3.2811 | 21.49 | 8.99 | 19.51 | 19.0 | 71.19 |
| 3.3012 | 5.0 | 5860 | 3.2742 | 21.79 | 9.18 | 19.77 | 19.0 | 71.25 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jkhan447/sarcasm-detection-RoBerta-base-CR-POS | jkhan447 | 2022-06-14T16:55:38Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T08:00:20Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-CR-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-CR-POS
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
- Accuracy: 0.4977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
danieladejumo/darknet-coco-object_detection | danieladejumo | 2022-06-14T16:40:32Z | 0 | 2 | null | [
"object-detection",
"COCO",
"YOLO",
"Darknet",
"model-index",
"region:us"
] | object-detection | 2022-06-13T14:43:12Z | ---
tags:
- object-detection
- COCO
- YOLO
- Darknet
model-index:
- name: darknet-coco-object_detection
results:
- metrics:
- type: None
value: '1'
name: None
task:
type: object-detection
name: object-detection
dataset:
name: COCO
type: COCO
---
## Darknet Object Detection on the COCO dataset
This model uses a pretrained YOLO Darknet model to perform object detection on an input image. The model can identify the 80 classes of the COCO dataset, which are listed in `config/coco.names`.
### Usage
Clone the repository using
```python
from huggingface_hub import Repository

# Clone the model repository into a local directory
repo = Repository("/local_repo_name", clone_from="danieladejumo/darknet-coco-object_detection")
```
Run a detection by using the function `detect(path_to_image)` in the notebook `darknet-coco-object_detection.ipynb`. The output image with the detection rectangle and classes will be saved to `images/image_file_name-det.jpg`
|
chiranthans23/xlm-roberta-base-finetuned-panx-de | chiranthans23 | 2022-06-14T16:13:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-13T16:40:42Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
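A minimal NER sketch; the German example sentence is arbitrary, and the entity types follow the PAN-X scheme (roughly `PER`, `ORG` and `LOC`):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chiranthans23/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel wurde in Hamburg geboren."))
```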
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Sebabrata/lmv2-g-aadhaar-236doc-06-14 | Sebabrata | 2022-06-14T15:12:43Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-14T14:24:48Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-aadhaar-236doc-06-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-aadhaar-236doc-06-14
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0427
- Aadhaar Precision: 0.9783
- Aadhaar Recall: 1.0
- Aadhaar F1: 0.9890
- Aadhaar Number: 45
- Dob Precision: 0.9787
- Dob Recall: 1.0
- Dob F1: 0.9892
- Dob Number: 46
- Gender Precision: 1.0
- Gender Recall: 0.9787
- Gender F1: 0.9892
- Gender Number: 47
- Name Precision: 0.9574
- Name Recall: 0.9375
- Name F1: 0.9474
- Name Number: 48
- Overall Precision: 0.9785
- Overall Recall: 0.9785
- Overall F1: 0.9785
- Overall Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Aadhaar Precision | Aadhaar Recall | Aadhaar F1 | Aadhaar Number | Dob Precision | Dob Recall | Dob F1 | Dob Number | Gender Precision | Gender Recall | Gender F1 | Gender Number | Name Precision | Name Recall | Name F1 | Name Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-------------:|:----------:|:------:|:----------:|:----------------:|:-------------:|:---------:|:-------------:|:--------------:|:-----------:|:-------:|:-----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.0024 | 1.0 | 188 | 0.5819 | 0.9348 | 0.9556 | 0.9451 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9574 | 0.9783 | 47 | 0.5172 | 0.625 | 0.5660 | 48 | 0.8410 | 0.8817 | 0.8609 | 0.9744 |
| 0.4484 | 2.0 | 376 | 0.3263 | 0.8980 | 0.9778 | 0.9362 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.6842 | 0.8125 | 0.7429 | 48 | 0.8838 | 0.9409 | 0.9115 | 0.9733 |
| 0.2508 | 3.0 | 564 | 0.2230 | 0.9318 | 0.9111 | 0.9213 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8913 | 0.8542 | 0.8723 | 48 | 0.9560 | 0.9355 | 0.9457 | 0.9811 |
| 0.165 | 4.0 | 752 | 0.1728 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8444 | 0.7917 | 0.8172 | 48 | 0.9457 | 0.9355 | 0.9405 | 0.9844 |
| 0.1081 | 5.0 | 940 | 0.0987 | 0.8958 | 0.9556 | 0.9247 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 1.0 | 0.9167 | 0.9565 | 48 | 0.9728 | 0.9624 | 0.9676 | 0.9928 |
| 0.0834 | 6.0 | 1128 | 0.0984 | 0.8980 | 0.9778 | 0.9362 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9574 | 0.9783 | 47 | 0.8148 | 0.9167 | 0.8627 | 48 | 0.9227 | 0.9624 | 0.9421 | 0.9833 |
| 0.0676 | 7.0 | 1316 | 0.0773 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9111 | 0.8542 | 0.8817 | 48 | 0.9620 | 0.9516 | 0.9568 | 0.9894 |
| 0.0572 | 8.0 | 1504 | 0.0786 | 0.8235 | 0.9333 | 0.8750 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8936 | 0.875 | 0.8842 | 48 | 0.9263 | 0.9462 | 0.9362 | 0.9872 |
| 0.0481 | 9.0 | 1692 | 0.0576 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9362 | 0.9167 | 0.9263 | 48 | 0.9679 | 0.9731 | 0.9705 | 0.99 |
| 0.0349 | 10.0 | 1880 | 0.0610 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8958 | 0.8958 | 0.8958 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9894 |
| 0.0287 | 11.0 | 2068 | 0.0978 | 0.9091 | 0.8889 | 0.8989 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9348 | 0.8958 | 0.9149 | 48 | 0.9615 | 0.9409 | 0.9511 | 0.985 |
| 0.0297 | 12.0 | 2256 | 0.0993 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.7959 | 0.8125 | 0.8041 | 48 | 0.9312 | 0.9462 | 0.9387 | 0.9833 |
| 0.0395 | 13.0 | 2444 | 0.0824 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.875 | 0.875 | 0.875 | 48 | 0.9519 | 0.9570 | 0.9544 | 0.9872 |
| 0.0333 | 14.0 | 2632 | 0.0788 | 0.8913 | 0.9111 | 0.9011 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9556 | 0.8958 | 0.9247 | 48 | 0.9617 | 0.9462 | 0.9539 | 0.9867 |
| 0.0356 | 15.0 | 2820 | 0.0808 | 0.84 | 0.9333 | 0.8842 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9565 | 0.9167 | 0.9362 | 48 | 0.9468 | 0.9570 | 0.9519 | 0.9867 |
| 0.0192 | 16.0 | 3008 | 0.0955 | 0.8462 | 0.9778 | 0.9072 | 45 | 0.9787 | 1.0 | 0.9892 | 46 | 0.9583 | 0.9787 | 0.9684 | 47 | 0.9070 | 0.8125 | 0.8571 | 48 | 0.9211 | 0.9409 | 0.9309 | 0.9822 |
| 0.016 | 17.0 | 3196 | 0.0936 | 0.9130 | 0.9333 | 0.9231 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9318 | 0.8542 | 0.8913 | 48 | 0.9615 | 0.9409 | 0.9511 | 0.9867 |
| 0.0218 | 18.0 | 3384 | 0.1009 | 0.9545 | 0.9333 | 0.9438 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8571 | 0.875 | 0.8660 | 48 | 0.9514 | 0.9462 | 0.9488 | 0.9844 |
| 0.0165 | 19.0 | 3572 | 0.0517 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9333 | 0.875 | 0.9032 | 48 | 0.9728 | 0.9624 | 0.9676 | 0.9906 |
| 0.0198 | 20.0 | 3760 | 0.0890 | 0.9167 | 0.9778 | 0.9462 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9149 | 0.8958 | 0.9053 | 48 | 0.9572 | 0.9624 | 0.9598 | 0.9867 |
| 0.0077 | 21.0 | 3948 | 0.0835 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.88 | 0.9167 | 0.8980 | 48 | 0.9577 | 0.9731 | 0.9653 | 0.9872 |
| 0.0088 | 22.0 | 4136 | 0.0427 | 0.9783 | 1.0 | 0.9890 | 45 | 0.9787 | 1.0 | 0.9892 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9574 | 0.9375 | 0.9474 | 48 | 0.9785 | 0.9785 | 0.9785 | 0.9939 |
| 0.0078 | 23.0 | 4324 | 0.0597 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8654 | 0.9375 | 0.9 | 48 | 0.9529 | 0.9785 | 0.9655 | 0.9889 |
| 0.0178 | 24.0 | 4512 | 0.0524 | 0.9574 | 1.0 | 0.9783 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 1.0 | 0.875 | 0.9333 | 48 | 0.9890 | 0.9624 | 0.9755 | 0.9922 |
| 0.012 | 25.0 | 4700 | 0.0637 | 0.9375 | 1.0 | 0.9677 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.8491 | 0.9375 | 0.8911 | 48 | 0.9430 | 0.9785 | 0.9604 | 0.9867 |
| 0.0135 | 26.0 | 4888 | 0.0668 | 0.9184 | 1.0 | 0.9574 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.86 | 0.8958 | 0.8776 | 48 | 0.9424 | 0.9677 | 0.9549 | 0.9867 |
| 0.0123 | 27.0 | 5076 | 0.0713 | 0.9565 | 0.9778 | 0.9670 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9375 | 0.9375 | 0.9375 | 48 | 0.9731 | 0.9731 | 0.9731 | 0.9911 |
| 0.0074 | 28.0 | 5264 | 0.0675 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9 | 0.9375 | 0.9184 | 48 | 0.9577 | 0.9731 | 0.9653 | 0.99 |
| 0.0051 | 29.0 | 5452 | 0.0713 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9167 | 0.9167 | 0.9167 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9906 |
| 0.0027 | 30.0 | 5640 | 0.0725 | 0.9362 | 0.9778 | 0.9565 | 45 | 1.0 | 1.0 | 1.0 | 46 | 1.0 | 0.9787 | 0.9892 | 47 | 0.9167 | 0.9167 | 0.9167 | 48 | 0.9626 | 0.9677 | 0.9651 | 0.9906 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
olivia371/finetuning-sentiment-model-3000-samples | olivia371 | 2022-06-14T15:05:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T11:52:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9253731343283581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2348
- Accuracy: 0.925
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Alireza1044/mobilebert_qqp | Alireza1044 | 2022-06-14T14:57:04Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T12:25:57Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8988869651249073
- name: F1
type: f1
value: 0.8670050100852366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qqp
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2458
- Accuracy: 0.8989
- F1: 0.8670
- Combined Score: 0.8829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
LDD/bert_from_scratch_wwm_new | LDD | 2022-06-14T14:32:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-06-14T14:27:55Z | A BERT model pre-trained from scratch on a news dataset with whole-word masking (wwm). |
erin2321232/sad | erin2321232 | 2022-06-14T14:11:43Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-14T14:11:43Z | ---
license: bigscience-bloom-rail-1.0
---
|
Alireza1044/mobilebert_mnli | Alireza1044 | 2022-06-14T11:22:34Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T09:30:21Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8230268510984541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4595
- Accuracy: 0.8230
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.3
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Rekcul/q-Taxi-v3 | Rekcul | 2022-06-14T10:19:14Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-14T10:10:20Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Rekcul/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
jkhan447/sarcasm-detection-RoBerta-base-POS | jkhan447 | 2022-06-14T09:55:14Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T06:56:15Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sarcasm-detection-RoBerta-base-POS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sarcasm-detection-RoBerta-base-POS
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6651
- Accuracy: 0.607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
saiharsha/vit-base-beans | saiharsha | 2022-06-14T09:54:53Z | 56 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-06-14T09:44:21Z | ---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9699248120300752
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1824
- Accuracy: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
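A minimal sketch that classifies one validation image from the `beans` dataset; loading the dataset is only for illustration, any leaf photo works:
```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("image-classification", model="saiharsha/vit-base-beans")

# Grab a sample leaf image (PIL) from the beans validation split
sample = load_dataset("beans", split="validation")[0]["image"]
print(classifier(sample))
```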
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.672 | 1.0 | 44 | 0.5672 | 0.9398 |
| 0.411 | 2.0 | 88 | 0.3027 | 0.9699 |
| 0.2542 | 3.0 | 132 | 0.2078 | 0.9699 |
| 0.1886 | 4.0 | 176 | 0.1882 | 0.9699 |
| 0.1931 | 5.0 | 220 | 0.1824 | 0.9699 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
zdreiosis/ff_analysis_4 | zdreiosis | 2022-06-14T09:44:05Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"gen_ffa",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T05:02:34Z | ---
license: apache-2.0
tags:
- gen_ffa
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ff_analysis_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ff_analysis_4
This model is a fine-tuned version of [zdreiosis/ff_analysis_4](https://huggingface.co/zdreiosis/ff_analysis_4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0022
- F1: 1.0
- Roc Auc: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:|
| No log | 1.47 | 50 | 0.0055 | 1.0 | 1.0 | 1.0 |
| No log | 2.94 | 100 | 0.0052 | 1.0 | 1.0 | 1.0 |
| No log | 4.41 | 150 | 0.0044 | 1.0 | 1.0 | 1.0 |
| No log | 5.88 | 200 | 0.0037 | 1.0 | 1.0 | 1.0 |
| No log | 7.35 | 250 | 0.0030 | 1.0 | 1.0 | 1.0 |
| No log | 8.82 | 300 | 0.0030 | 1.0 | 1.0 | 1.0 |
| No log | 10.29 | 350 | 0.0028 | 1.0 | 1.0 | 1.0 |
| No log | 11.76 | 400 | 0.0027 | 1.0 | 1.0 | 1.0 |
| No log | 13.24 | 450 | 0.0025 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 14.71 | 500 | 0.0022 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 16.18 | 550 | 0.0025 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 17.65 | 600 | 0.0023 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 19.12 | 650 | 0.0022 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 20.59 | 700 | 0.0022 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 22.06 | 750 | 0.0021 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 23.53 | 800 | 0.0020 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 25.0 | 850 | 0.0020 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 26.47 | 900 | 0.0019 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 27.94 | 950 | 0.0019 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 29.41 | 1000 | 0.0019 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
nboudad/Maghribert | nboudad | 2022-06-14T09:27:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-09T19:07:37Z | ---
widget:
- text: "جاب ليا [MASK] ."
example_title: "example1"
- text: "مشيت نجيب [MASK] فالفرماسيان ."
example_title: "example2"
--- |
Marscen/distilbert-base-uncased-finetuned-squad | Marscen | 2022-06-14T09:21:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-13T09:09:18Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4052
## Model description
More information needed
## Intended uses & limitations
More information needed
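A minimal question-answering sketch; the question/context pair is arbitrary, and since the model was tuned on SQuAD v2, unanswerable questions can be permitted via `handle_impossible_answer`:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Marscen/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This DistilBERT model was fine-tuned on the SQuAD v2 dataset for extractive question answering.",
    handle_impossible_answer=True,
)
print(result)
```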
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2178 | 1.0 | 8235 | 1.1827 |
| 0.9355 | 2.0 | 16470 | 1.3283 |
| 0.761 | 3.0 | 24705 | 1.4052 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.8.1+cu111
- Datasets 2.2.2
- Tokenizers 0.12.1
|
dksari/en_pipeline | dksari | 2022-06-14T08:46:56Z | 3 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] | token-classification | 2022-06-14T08:34:43Z | ---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 100.00
- name: NER Recall
type: recall
value: 100.00
- name: NER F Score
type: f_score
value: 100.00
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.3.1,<3.4.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | MIT |
| **Author** | [Dian Kurniasari]() |
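A minimal usage sketch, assuming the `en_pipeline` package from this repository has already been installed (for example via the wheel in the repo's file list); the example sentence is arbitrary:
```python
import spacy

# Requires the en_pipeline package from this repo to be pip-installed first
nlp = spacy.load("en_pipeline")

doc = nlp("TNF-alpha induces phosphorylation of IkB and degradation of the protein.")
print([(ent.text, ent.label_) for ent in doc.ents])
```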
### Label Scheme
<details>
<summary>View label scheme (6 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `geneexpression`, `localization`, `negativeregulation`, `phosphorylation`, `positiveregulation`, `proteincatabolism` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 92639.74 |
| `NER_LOSS` | 12636.18 | |
efederici/mmarco-sentence-BERTino | efederici | 2022-06-14T08:36:11Z | 51 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"dataset:unicamp-dl/mmarco",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-06-01T18:20:17Z | ---
pipeline_tag: sentence-similarity
license: apache-2.0
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- unicamp-dl/mmarco
---
# mmarco-sentence-BERTino
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It was trained on [mmarco](https://huggingface.co/datasets/unicamp-dl/mmarco/viewer/italian/train).
<p align="center">
<img src="https://media.tate.org.uk/art/images/work/L/L04/L04294_9.jpg" width="600"> </br>
Mohan Samant, Midnight Fishing Party, 1978
</p>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/mmarco-sentence-BERTino')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/mmarco-sentence-BERTino')
model = AutoModel.from_pretrained('efederici/mmarco-sentence-BERTino')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
``` |
Alireza1044/mobilebert_mrpc | Alireza1044 | 2022-06-14T08:16:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-14T08:06:49Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8888888888888888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3672
- Accuracy: 0.8382
- F1: 0.8889
- Combined Score: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
shaojie/distilbert-base-uncased-finetuned-squad | shaojie | 2022-06-14T07:26:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-13T07:35:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1585
- eval_runtime: 138.1018
- eval_samples_per_second: 78.087
- eval_steps_per_second: 4.88
- epoch: 1.0
- step: 5533
## Model description
More information needed
## Intended uses & limitations
More information needed
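A minimal extractive-QA sketch using the standard Transformers `pipeline` API (the question/context pair is illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="shaojie/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This checkpoint is a version of distilbert-base-uncased fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```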
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
crodri/roberta-base-ca-v2-qa-catalanqa | crodri | 2022-06-14T06:11:33Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-13T15:05:05Z | ---
license: cc0-1.0
---
The roberta-base-ca-cased-qa is a Question Answering (QA) model for the Catalan language fine-tuned from the BERTa model, a RoBERTa base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the BERTa model card for more details).
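A minimal usage sketch with the standard Transformers question-answering pipeline (the Catalan question/context pair below is only illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="crodri/roberta-base-ca-v2-qa-catalanqa")

result = qa(
    question="Quan es va posar en marxa la Viquipèdia en català?",
    context="La Viquipèdia en català es va posar en marxa el 16 de març de 2001.",
)
print(result)
```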
## Datasets
We used the Catalan QA datasets ViquiQuAD, VilaQuad and XQuad_ca, with test, training and evaluation splits (90-10-10) balanced by type of question:
- Test: 2255
- Evaluation: 2276
- Train: 18082 |
LDD/bert_mlm_new | LDD | 2022-06-14T05:43:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-06-11T09:46:01Z | 在bert-base-chinese基础上进行新闻语料库的增量预训练的模型,token采用的是hfl/chinese-bert-wwm-ext |
Wikidepia/albert-punctuation | Wikidepia | 2022-06-14T04:53:52Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"albert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-06-13T12:41:47Z | ---
tags:
- generated_from_trainer
model-index:
- name: albert-puncapital
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-puncapital
This model is a fine-tuned version of [indobert-lite-base-p2](https://huggingface.co/indobert-lite-base-p2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
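The card gives no inference example; a minimal token-classification sketch is shown below. The label set (which punctuation mark, if any, should follow each token) is not documented here, so inspect the raw output before building any post-processing around it. The input sentence is an illustrative, unpunctuated Indonesian phrase.

```python
from transformers import pipeline

restorer = pipeline("token-classification", model="Wikidepia/albert-punctuation")

# Lower-cased, unpunctuated Indonesian text (illustrative example)
for tag in restorer("halo apa kabar semoga harimu menyenangkan"):
    print(tag["word"], tag["entity"], round(tag["score"], 3))
```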
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
praf-choub/bart-mofe-rl-xsum | praf-choub | 2022-06-14T04:52:41Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"en",
"dataset:xsum",
"arxiv:2110.07166",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-05-03T18:08:08Z | ---
language: en
tags:
- summarization
license: bsd-3-clause
datasets:
- xsum
---
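The tags declare this as a BART summarization checkpoint associated with XSum, so a minimal sketch with the standard Transformers pipeline should apply (the article text is illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="praf-choub/bart-mofe-rl-xsum")

article = (
    "Prison Link Cymru had 1,099 referrals in 2015-16 and said some ex-offenders "
    "were living rough for up to a year before finding suitable accommodation."
)
print(summarizer(article, max_length=60, num_beams=4)[0]["summary_text"])
```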
## Citation
```
@article{DBLP:journals/corr/abs-2110-07166,
author = {Prafulla Kumar Choubey and
Jesse Vig and
Wenhao Liu and
Nazneen Fatema Rajani},
title = {MoFE: Mixture of Factual Experts for Controlling Hallucinations in
Abstractive Summarization},
journal = {CoRR},
volume = {abs/2110.07166},
year = {2021},
url = {https://arxiv.org/abs/2110.07166},
eprinttype = {arXiv},
eprint = {2110.07166},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-07166.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
AngelUrq/q-Taxi-v3 | AngelUrq | 2022-06-14T04:09:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-14T04:09:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.40 +/- 2.79
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AngelUrq/q-Taxi-v3", filename="q-learning.pkl")  # load_from_hub and evaluate_agent are helpers from the Deep RL course notebook
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
eslamxm/AraT5-base-finetune-ar-wikilingua | eslamxm | 2022-06-14T02:30:20Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"ar",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-13T19:22:56Z | ---
tags:
- summarization
- ar
- Abstractive Summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: AraT5-base-finetune-ar-wikilingua
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraT5-base-finetune-ar-wikilingua
This model is a fine-tuned version of [UBC-NLP/AraT5-base](https://huggingface.co/UBC-NLP/AraT5-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6110
- Rouge-1: 19.97
- Rouge-2: 6.9
- Rouge-l: 18.25
- Gen Len: 18.45
- Bertscore: 69.44
## Model description
More information needed
## Intended uses & limitations
More information needed
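A minimal Arabic summarization sketch with the Transformers pipeline (the input paragraph is illustrative; `sentencepiece` must be installed for the AraT5 tokenizer):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="eslamxm/AraT5-base-finetune-ar-wikilingua")

arabic_text = (
    "تعد القاهرة أكبر مدن مصر وعاصمتها، ويقطنها ملايين السكان، "
    "وتضم العديد من المعالم التاريخية مثل قلعة صلاح الدين والجامع الأزهر."
)
print(summarizer(arabic_text, max_length=64, num_beams=4)[0]["summary_text"])
```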
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 11.5412 | 1.0 | 312 | 6.8825 | 5.2 | 0.69 | 5.04 | 19.0 | 63.2 |
| 6.5212 | 2.0 | 624 | 5.8992 | 8.89 | 1.4 | 8.36 | 17.92 | 63.9 |
| 5.8302 | 3.0 | 936 | 5.3712 | 9.99 | 2.21 | 9.54 | 15.87 | 65.08 |
| 5.406 | 4.0 | 1248 | 5.0632 | 13.94 | 3.5 | 13.0 | 15.95 | 66.83 |
| 5.1109 | 5.0 | 1560 | 4.8718 | 15.28 | 4.34 | 14.27 | 18.26 | 66.83 |
| 4.9004 | 6.0 | 1872 | 4.7631 | 16.65 | 4.92 | 15.46 | 17.73 | 67.75 |
| 4.754 | 7.0 | 2184 | 4.6920 | 18.31 | 5.79 | 16.9 | 18.17 | 68.55 |
| 4.6369 | 8.0 | 2496 | 4.6459 | 18.6 | 6.12 | 17.16 | 18.16 | 68.66 |
| 4.5595 | 9.0 | 2808 | 4.6153 | 18.94 | 6.1 | 17.39 | 17.82 | 68.99 |
| 4.4967 | 10.0 | 3120 | 4.6110 | 19.15 | 6.25 | 17.55 | 17.91 | 69.09 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
nestoralvaro/mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2 | nestoralvaro | 2022-06-14T02:06:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-13T17:15:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-xsum-RAW_data_prep_2021_12_26___t22027_162754.csv__g_mt5_base_L2
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0127
- Rouge2: 0.0
- Rougel: 0.0128
- Rougelsum: 0.0129
- Gen Len: 6.329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 131773 | nan | 0.0127 | 0.0 | 0.0128 | 0.0129 | 6.329 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
huggingtweets/honiemun | huggingtweets | 2022-06-13T23:11:55Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-06-13T23:11:47Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1509372264424296448/HVPI1lQu_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">𝘏𝘰𝘯𝘪𝘦 ♡</div>
<div style="text-align: center; font-size: 14px;">@honiemun</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 𝘏𝘰𝘯𝘪𝘦 ♡.
| Data | 𝘏𝘰𝘯𝘪𝘦 ♡ |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 231 |
| Short tweets | 381 |
| Tweets kept | 2595 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/teqt0sk7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @honiemun's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bqoay71) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bqoay71/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/honiemun')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
TeamHaltmannSusanaHWCEO/DALL-X-1.0A | TeamHaltmannSusanaHWCEO | 2022-06-13T22:49:44Z | 0 | 0 | null | [
"region:us"
] | null | 2022-06-13T22:48:18Z | ```python
from gpt2_client import *
gpt2 = GPT2()
streamlit_code_base = gpt2.generate(
prompt="Enter prompt here",
temperature=0.7,
top_p=0.9,
nsamples=1,
batch_size=1,
length=1000,
include_prefix=True
)
print(streamlit_code_base)
``` |
evangeloc/t5-small-finetuned-xsum_3epoch_batch8 | evangeloc | 2022-06-13T22:46:59Z | 5 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-06-12T15:13:36Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: evangeloc/t5-small-finetuned-xsum_3epoch_batch8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# evangeloc/t5-small-finetuned-xsum_3epoch_batch8
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5178
- Validation Loss: 2.3002
- Train Rouge1: 31.6237
- Train Rouge2: 10.4288
- Train Rougel: 25.3564
- Train Rougelsum: 25.3203
- Train Gen Len: 18.86
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
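Only TensorFlow weights are advertised in the tags, so the sketch below loads the checkpoint with the TF auto classes. The `summarize:` prefix follows the usual T5 convention, although the card does not say whether it was used during fine-tuning, and the article text is illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "evangeloc/t5-small-finetuned-xsum_3epoch_batch8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = (
    "The local council has approved plans for a new cycle path linking the town centre "
    "with the railway station, after years of campaigning by residents."
)
inputs = tokenizer("summarize: " + article, return_tensors="tf", truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```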
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.7208 | 2.4024 | 30.3441 | 9.9367 | 24.4023 | 24.4171 | 18.83 | 0 |
| 2.5818 | 2.3390 | 30.5249 | 9.9161 | 24.1981 | 24.2080 | 18.825 | 1 |
| 2.5178 | 2.3002 | 31.6237 | 10.4288 | 25.3564 | 25.3203 | 18.86 | 2 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
laboyle1/distilbert-finetuned | laboyle1 | 2022-06-13T21:03:49Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-13T14:38:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9895
## Model description
More information needed
## Intended uses & limitations
More information needed
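A minimal fill-mask sketch (DistilBERT uses the `[MASK]` token; the top predictions reflect whatever domain the model was fine-tuned on, which is not documented here, and the prompt is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="laboyle1/distilbert-finetuned")

for pred in fill("The committee will meet again next [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```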
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2103 | 1.0 | 10024 | 2.0834 |
| 2.1146 | 2.0 | 20048 | 2.0387 |
| 2.0721 | 3.0 | 30072 | 2.0095 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
simecek/DNAPerceiver1_2epochs | simecek | 2022-06-13T20:40:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"perceiver",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-06-10T11:38:17Z | ---
tags:
- generated_from_trainer
model-index:
- name: DNAPerceiver1_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DNAPerceiver1_2epochs
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3330
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 36000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3597 | 0.3 | 6000 | 1.3565 |
| 1.3566 | 0.6 | 12000 | 1.3557 |
| 1.3514 | 0.89 | 18000 | 1.3474 |
| 1.345 | 1.19 | 24000 | 1.3410 |
| 1.3386 | 1.49 | 30000 | 1.3357 |
| 1.3348 | 1.79 | 36000 | 1.3330 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Andrey1989/mbert-finetuned-ner | Andrey1989 | 2022-06-13T19:46:59Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mbert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: lv
metrics:
- name: Precision
type: precision
value: 0.9304986338797814
- name: Recall
type: recall
value: 0.9375430144528561
- name: F1
type: f1
value: 0.9340075419952005
- name: Accuracy
type: accuracy
value: 0.9699674740348558
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1264
- Precision: 0.9305
- Recall: 0.9375
- F1: 0.9340
- Accuracy: 0.9700
## Model description
More information needed
## Intended uses & limitations
More information needed
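A minimal NER sketch with the token-classification pipeline; the checkpoint was fine-tuned on the Latvian (`lv`) split of WikiANN, so the (illustrative) example sentence is in Latvian.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Andrey1989/mbert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Barack Obama dzīvo Vašingtonā."))
```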
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.301 | 1.0 | 625 | 0.1756 | 0.8843 | 0.9067 | 0.8953 | 0.9500 |
| 0.1259 | 2.0 | 1250 | 0.1248 | 0.9285 | 0.9335 | 0.9310 | 0.9688 |
| 0.0895 | 3.0 | 1875 | 0.1264 | 0.9305 | 0.9375 | 0.9340 | 0.9700 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
caio13/dalle-mono | caio13 | 2022-06-13T19:37:40Z | 0 | 0 | null | [
"arxiv:2102.08981",
"arxiv:2012.09841",
"arxiv:1910.13461",
"arxiv:1910.09700",
"region:us"
] | null | 2022-06-13T19:27:12Z | # DALL·E Mini Model Card
This is a copy; credits to https://huggingface.co/dalle-mini/dalle-mini/tree/main
This model card focuses on the model associated with the DALL·E mini space on Hugging Face, available [here](https://huggingface.co/spaces/dalle-mini/dalle-mini). The app is called “dalle-mini”, but incorporates “[DALL·E Mini](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy)’’ and “[DALL·E Mega](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training-Journal--VmlldzoxODMxMDI2)” models (further details on this distinction forthcoming).
## Model Details
* **Developed by:** Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê, Luke, Luke Melas, Ritobrata Ghosh
* **Modified by:** Caio13m
* **Model type:** Transformer-based text-to-image generation model
* **Language(s):** English
* **License:** Apache 2.0
* **Model Description:** This is a model that can be used to generate images based on text prompts. As the model developers wrote in the [project report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy) about DALL·E mini, “OpenAI had the first impressive model for generating images with [DALL·E](https://openai.com/blog/dall-e/). DALL·E mini is an attempt at reproducing those results with an open-source model.”
* **Resources for more information:** See OpenAI’s website for more information about [DALL·E](https://openai.com/blog/dall-e/), including the [DALL·E model card](https://github.com/openai/DALL-E/blob/master/model_card.md). See the [project report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy) for more information from the model’s developers. To learn more about DALL·E Mega, see the DALL·E Mega [training journal](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training--VmlldzoxODMxMDI2#training-parameters).
* **Cite as:**
```bibtex
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
```
## Uses
### Direct Use
The model is intended to be used to generate images based on text prompts for research and personal consumption. Intended uses include supporting creativity, creating humorous content, and providing generations for people curious about the model’s behavior. Intended uses exclude those described in the [Misuse and Out-of-Scope Use](#misuse-malicious-use-and-out-of-scope-use) section.
### Downstream Use
The model could also be used for downstream use cases, including:
* Research efforts, such as probing and better understanding the limitations and biases of generative models to further improve the state of science
* Development of educational or creative tools
* Generation of artwork and use in design and artistic processes.
* Other uses that are newly discovered by users. This currently includes poetry illustration (give a poem as prompt), fan art (putting a character in various other visual universes), visual puns, fairy tale illustrations (give a fantasy situation as prompt), concept mashups (applying a texture to something completely different), style transfers (portraits in the style of), … We hope you will find your own application!
Downstream uses exclude the uses described in [Misuse and Out-of-Scope Use](#misuse-malicious-use-and-out-of-scope-use).
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes:
* Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
* Intentionally promoting or propagating discriminatory content or harmful stereotypes.
* Impersonating individuals without their consent.
* Sexual content without consent of the people who might see it.
* Mis- and disinformation
* Representations of egregious violence and gore
* Sharing of copyrighted or licensed material in violation of its terms of use.
* Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
The model developers discuss the limitations of the model further in the DALL·E Mini [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA):
* Faces and people in general are not generated properly.
* Animals are usually unrealistic.
* It is hard to predict where the model excels or falls short…Good prompt engineering will lead to the best results.
* The model has only been trained with English descriptions and will not perform as well in other languages
### Bias
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
The model was trained on unfiltered data from the Internet, limited to pictures with English descriptions. Text and images from communities and cultures using other languages were not utilized. This affects all output of the model, with white and Western culture asserted as a default, and the model’s ability to generate content using non-English prompts is observably lower quality than prompts in English.
While the capabilities of image generation models are impressive, they may also reinforce or exacerbate societal biases. The extent and nature of the biases of DALL·E Mini and DALL·E Mega models have yet to be fully documented, but initial testing demonstrates that they may generate images that contain negative stereotypes against minoritized groups. Work to analyze the nature and extent of the models’ biases and limitations is ongoing.
Our current analyses demonstrate that:
* Images generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
* When the model generates images with people in them, it tends to output people who we perceive to be white, while people of color are underrepresented.
* Images generated by the model can contain biased content that depicts power differentials between people of color and people who are white, with white people in positions of privilege.
* The model is generally only usable for generating images based on text in English, limiting accessibility of the model for non-English speakers and potentially contributing to the biases in images generated by the model.
The [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA) discusses these issues in more detail, and also highlights potential sources of bias in the model development process.
### Limitations and Bias Recommendations
* Users (both direct and downstream) should be made aware of the biases and limitations.
* Content that is potentially problematic should be filtered out, e.g., via automated models that detect violence or pornography.
* Further work on this model should include methods for balanced and just representations of people and cultures, for example, by curating the training dataset to be both diverse and inclusive.
## Training
### Training Data
The model developers used 3 datasets for the model:
* [Conceptual Captions Dataset](https://aclanthology.org/P18-1238/), which contains 3 million image and caption pairs.
* [Conceptual 12M](https://arxiv.org/abs/2102.08981), which contains 12 million image and caption pairs.
* The [OpenAI subset](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md) of [YFCC100M](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/), which contains about 15 million images and that we further sub-sampled to 2 million images due to limitations in storage space. They used both title and description as caption and removed html tags, new lines and extra spaces.
For fine-tuning the image encoder, a subset of 2 million images were used.
All images (about 15 million) were used for training the Seq2Seq model.
### Training Procedure
As described further in the [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA#our-dall-e-model-architecture) for DALL·E Mini, during training, images and descriptions are both available and pass through the system as follows:
* Images are encoded through a [VQGAN](https://arxiv.org/abs/2012.09841) encoder, which turns images into a sequence of tokens.
* Descriptions are encoded through a [BART](https://arxiv.org/abs/1910.13461) encoder.
* The output of the BART encoder and encoded images are fed through the BART decoder, which is an auto-regressive model whose goal is to predict the next token.
* Loss is the [softmax cross-entropy](https://wandb.ai/sauravm/Activation-Functions/reports/Activation-Functions-Softmax--VmlldzoxNDU1Njgy#%F0%9F%93%A2-softmax-+-cross-entropy-loss-(caution:-math-alert)) between the model prediction logits and the actual image encodings from the VQGAN.
The simplified training procedure for DALL·E Mega is as follows:
* **Hardware:** 1 pod TPU v3-256 = 32 nodes of TPU VM v3-8 (8 TPU per node) = 256 TPU v3
* **Optimizer:** Distributed Shampoo
* **Model Partition Specifications:** 8 model parallel x 32 data parallel
* **Batch:** 44 samples per model x 32 data parallel x 3 gradient accumulation steps = 4224 increasing samples per update
* **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant until plateau
* Gradient checkpointing used on each Encoder/Decoder layer (ie, MHA + FFN)
* Distributed Shampoo + Normformer optimizations have proved effective at efficiently scaling this model.
* It should also be noted that the learning rate and other parameters are sometimes adjusted on the fly, and batch size increased over time as well.
There is more information about the full procedure and technical material in the DALL·E Mega [training journal](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mega-Training--VmlldzoxODMxMDI2#training-parameters).
## Evaluation Results
The model developers discuss their results extensively in their [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained-with-Demo--Vmlldzo4NjIxODA#the-results-of-our-dall-e-experiment) for DALL·E Mini, which provides comparisons between DALL·E Mini’s results with [DALL·E-pytorch](https://github.com/lucidrains/DALLE-pytorch), OpenAI’s [DALL·E](https://openai.com/blog/dall-e/), and models consisting of a generator coupled with the [CLIP neural network model](https://openai.com/blog/clip/).
For evaluation results related to DALL·E Mega, see this [technical report](https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy).
## Environmental Impact
### DALL·E Mini Estimated Emissions
*The model is 27 times smaller than the original DALL·E and was trained on a single TPU v3-8 for only 3 days.*
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
* **Hardware Type:** TPU v3-8
* **Hours used:** 72 (3 days)
* **Cloud Provider:** GCP (as mentioned in the technical report)
* **Compute Region:** us-east1 (provided by model developers)
* **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 7.54 kg CO2 eq.
### DALL·E Mega Estimated Emissions
DALL·E Mega is still training. So far, as on June 9, 2022, the model developers report that DALL·E Mega has been training for about 40-45 days on a TPU v3-256. Using those numbers, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
* **Hardware Type:** TPU v3-256
* **Hours used:** 960 - 1080 hours (40-45 days)
* **Cloud Provider:** Unknown
* **Compute Region:** Unknown
* **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** Unknown
## Citation
```bibtex
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
```
*This model card was written by: Boris Dayma, Margaret Mitchell, Ezi Ozoani, Marissa Gerchick, Irene Solaiman, Clémentine Fourrier, Sasha Luccioni, Emily Witko, Nazneen Rajani, and Julian Herrera.* |
QuentinKemperino/ECHR_test_Merged | QuentinKemperino | 2022-06-13T19:29:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:lex_glue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-06-13T11:25:32Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_Merged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_Merged
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Macro-f1: 0.5607
- Micro-f1: 0.6726
## Model description
More information needed
## Intended uses & limitations
More information needed
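LexGLUE's ECtHR tasks are multi-label (several Convention articles can be alleged at once), so the sketch below applies a sigmoid and a 0.5 threshold to the logits. Treating the classification head as multi-label is an assumption based on that task format, and the fact description is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "QuentinKemperino/ECHR_test_Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

facts = "The applicant complained that his pre-trial detention had been excessively long."
inputs = tokenizer(facts, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]  # multi-label: one probability per ECHR article label
for i, p in enumerate(probs):
    if p > 0.5:
        print(model.config.id2label[i], round(float(p), 3))
```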
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2278 | 0.44 | 500 | 0.3196 | 0.2394 | 0.4569 |
| 0.1891 | 0.89 | 1000 | 0.2827 | 0.3255 | 0.5112 |
| 0.1803 | 1.33 | 1500 | 0.2603 | 0.3961 | 0.5698 |
| 0.1676 | 1.78 | 2000 | 0.2590 | 0.4251 | 0.6003 |
| 0.1635 | 2.22 | 2500 | 0.2489 | 0.4186 | 0.6030 |
| 0.1784 | 2.67 | 3000 | 0.2445 | 0.4627 | 0.6159 |
| 0.1556 | 3.11 | 3500 | 0.2398 | 0.4757 | 0.6170 |
| 0.151 | 3.56 | 4000 | 0.2489 | 0.4725 | 0.6163 |
| 0.1564 | 4.0 | 4500 | 0.2289 | 0.5019 | 0.6416 |
| 0.1544 | 4.44 | 5000 | 0.2406 | 0.5013 | 0.6408 |
| 0.1516 | 4.89 | 5500 | 0.2351 | 0.5145 | 0.6510 |
| 0.1487 | 5.33 | 6000 | 0.2354 | 0.5164 | 0.6394 |
| 0.1385 | 5.78 | 6500 | 0.2385 | 0.5205 | 0.6486 |
| 0.145 | 6.22 | 7000 | 0.2337 | 0.5197 | 0.6529 |
| 0.1332 | 6.67 | 7500 | 0.2294 | 0.5421 | 0.6526 |
| 0.1293 | 7.11 | 8000 | 0.2167 | 0.5576 | 0.6652 |
| 0.1475 | 7.56 | 8500 | 0.2218 | 0.5676 | 0.6649 |
| 0.1376 | 8.0 | 9000 | 0.2203 | 0.5565 | 0.6709 |
| 0.1408 | 8.44 | 9500 | 0.2178 | 0.5541 | 0.6716 |
| 0.133 | 8.89 | 10000 | 0.2212 | 0.5692 | 0.6640 |
| 0.1363 | 9.33 | 10500 | 0.2148 | 0.5642 | 0.6736 |
| 0.1344 | 9.78 | 11000 | 0.2162 | 0.5607 | 0.6726 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ahmeddbahaa/mT5_multilingual_XLSum-finetune-ar-xlsum | ahmeddbahaa | 2022-06-13T19:20:20Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"mT5_multilingual_XLSum",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-06-11T19:48:24Z | ---
tags:
- summarization
- mT5_multilingual_XLSum
- mt5
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mT5_multilingual_XLSum-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetune-ar-xlsum
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2497
- Rouge-1: 32.52
- Rouge-2: 14.71
- Rouge-l: 27.88
- Gen Len: 41.45
- Bertscore: 74.65
## Model description
More information needed
## Intended uses & limitations
More information needed
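The sketch below shows one way to summarise Arabic text with this checkpoint via the standard pipeline API (illustrative input; `sentencepiece` is needed for the mT5 tokenizer):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ahmeddbahaa/mT5_multilingual_XLSum-finetune-ar-xlsum")

text = (
    "أعلنت وزارة الصحة عن حملة وطنية جديدة للتطعيم تستهدف الأطفال دون سن الخامسة، "
    "وذلك بالتعاون مع منظمة الصحة العالمية وعدد من المنظمات المحلية."
)
print(summarizer(text, max_length=64, num_beams=4)[0]["summary_text"])
```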
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.5465 | 1.0 | 585 | 3.3215 | 30.09 | 13.23 | 26.07 | 36.31 | 73.97 |
| 3.3564 | 2.0 | 1170 | 3.2547 | 31.29 | 13.93 | 26.75 | 41.68 | 74.22 |
| 3.2185 | 3.0 | 1755 | 3.2421 | 31.78 | 14.1 | 27.07 | 41.64 | 74.4 |
| 3.1145 | 4.0 | 2340 | 3.2241 | 31.98 | 14.38 | 27.51 | 40.29 | 74.46 |
| 3.031 | 5.0 | 2925 | 3.2313 | 32.3 | 14.67 | 27.83 | 39.81 | 74.61 |
| 2.9627 | 6.0 | 3510 | 3.2348 | 32.39 | 14.65 | 27.76 | 40.02 | 74.6 |
| 2.9088 | 7.0 | 4095 | 3.2439 | 32.5 | 14.66 | 27.81 | 41.2 | 74.65 |
| 2.8649 | 8.0 | 4680 | 3.2497 | 32.52 | 14.71 | 27.88 | 41.45 | 74.65 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|