modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
alisonbrwn/ppo-LunarLander_doubled_steps_wyth_hptune | alisonbrwn | 2022-05-12T11:42:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-12T11:42:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 253.43 +/- 10.79
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
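Until the author adds their own snippet, here is a minimal sketch (not the author's original code) of how a PPO checkpoint like this one is typically loaded from the Hub with `huggingface_sb3`; the checkpoint filename inside the repo is an assumption and should be checked against the repo's file list.
```python
# Minimal sketch, not the author's code; the filename is an assumption.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="alisonbrwn/ppo-LunarLander_doubled_steps_wyth_hptune",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy (classic gym API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```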
|
Sumedha/distilbert-base-uncased-finetuned-imdb | Sumedha | 2022-05-12T11:10:45Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-05-12T09:07:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
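The hyperparameters listed above map roughly onto the following `TrainingArguments`; this is an illustrative sketch, not the exact script used to train the model (the output directory is an assumption, and the Adam betas/epsilon above are the library defaults).
```python
# Illustrative sketch only, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```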
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4884 |
| 2.5761 | 2.0 | 314 | 2.4230 |
| 2.5255 | 3.0 | 471 | 2.4356 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.0
- Tokenizers 0.11.0
|
DioLiu/distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle | DioLiu | 2022-05-12T11:04:41Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-12T08:35:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-shake-wiki-update-shuffle
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
- Accuracy: 0.9971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0166 | 1.0 | 7783 | 0.0135 | 0.9965 |
| 0.0091 | 2.0 | 15566 | 0.0172 | 0.9968 |
| 0.0059 | 3.0 | 23349 | 0.0223 | 0.9968 |
| 0.0 | 4.0 | 31132 | 0.0332 | 0.9962 |
| 0.0001 | 5.0 | 38915 | 0.0284 | 0.9971 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
madatnlp/prefix-ket5-scratch | madatnlp | 2022-05-12T09:23:55Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-12T07:49:21Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/prefix-ket5-scratch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/prefix-ket5-scratch
This model is a fine-tuned version of [madatnlp/ke-t5-math-py](https://huggingface.co/madatnlp/ke-t5-math-py) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7214
- Validation Loss: 0.8747
- Epoch: 98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.0101 | 5.1280 | 0 |
| 4.8040 | 3.6005 | 1 |
| 3.7550 | 2.8108 | 2 |
| 3.2740 | 2.6402 | 3 |
| 2.9682 | 2.3173 | 4 |
| 2.6871 | 2.1585 | 5 |
| 2.4782 | 2.0828 | 6 |
| 2.3507 | 1.9557 | 7 |
| 2.2131 | 1.8513 | 8 |
| 2.1235 | 1.6324 | 9 |
| 2.0157 | 1.6270 | 10 |
| 1.9722 | 1.6217 | 11 |
| 1.8733 | 1.5436 | 12 |
| 1.8680 | 1.5872 | 13 |
| 1.8365 | 1.6040 | 14 |
| 1.7528 | 1.5049 | 15 |
| 1.7411 | 1.4754 | 16 |
| 1.6733 | 1.4409 | 17 |
| 1.6544 | 1.4230 | 18 |
| 1.6271 | 1.4556 | 19 |
| 1.5658 | 1.3797 | 20 |
| 1.5774 | 1.3269 | 21 |
| 1.5150 | 1.3108 | 22 |
| 1.5057 | 1.3785 | 23 |
| 1.4605 | 1.3114 | 24 |
| 1.4702 | 1.2618 | 25 |
| 1.4220 | 1.2164 | 26 |
| 1.4194 | 1.2409 | 27 |
| 1.3942 | 1.2603 | 28 |
| 1.3921 | 1.3010 | 29 |
| 1.3645 | 1.1850 | 30 |
| 1.3336 | 1.1273 | 31 |
| 1.3499 | 1.1533 | 32 |
| 1.3022 | 1.1683 | 33 |
| 1.2990 | 1.1403 | 34 |
| 1.2876 | 1.1241 | 35 |
| 1.2479 | 1.0957 | 36 |
| 1.2441 | 1.1989 | 37 |
| 1.2464 | 1.1416 | 38 |
| 1.2353 | 1.0636 | 39 |
| 1.2152 | 1.1136 | 40 |
| 1.2212 | 1.0635 | 41 |
| 1.1892 | 1.0818 | 42 |
| 1.1959 | 1.1041 | 43 |
| 1.1957 | 1.0912 | 44 |
| 1.1542 | 1.0949 | 45 |
| 1.1403 | 1.1272 | 46 |
| 1.1396 | 1.1169 | 47 |
| 1.1149 | 1.0606 | 48 |
| 1.1238 | 1.0610 | 49 |
| 1.1246 | 1.0234 | 50 |
| 1.0971 | 0.9865 | 51 |
| 1.0883 | 1.0568 | 52 |
| 1.0774 | 1.0099 | 53 |
| 1.0581 | 1.0023 | 54 |
| 1.0680 | 1.0197 | 55 |
| 1.0682 | 0.9835 | 56 |
| 1.0390 | 0.9789 | 57 |
| 1.0480 | 1.0217 | 58 |
| 1.0273 | 0.9622 | 59 |
| 1.0062 | 1.0174 | 60 |
| 1.0088 | 0.9612 | 61 |
| 0.9909 | 0.9998 | 62 |
| 0.9821 | 1.0115 | 63 |
| 0.9752 | 0.9712 | 64 |
| 0.9816 | 0.9677 | 65 |
| 0.9569 | 0.9503 | 66 |
| 0.9521 | 1.0052 | 67 |
| 0.9384 | 0.9752 | 68 |
| 0.9468 | 0.9767 | 69 |
| 0.9241 | 1.0076 | 70 |
| 0.9211 | 0.9414 | 71 |
| 0.9166 | 1.0294 | 72 |
| 0.9044 | 0.9772 | 73 |
| 0.9025 | 0.9273 | 74 |
| 0.8909 | 1.0077 | 75 |
| 0.8831 | 0.9292 | 76 |
| 0.8702 | 0.9320 | 77 |
| 0.8644 | 0.9879 | 78 |
| 0.8599 | 0.9027 | 79 |
| 0.8434 | 0.9197 | 80 |
| 0.8561 | 0.9447 | 81 |
| 0.8330 | 0.9730 | 82 |
| 0.8328 | 0.9137 | 83 |
| 0.8221 | 0.9232 | 84 |
| 0.8166 | 0.9115 | 85 |
| 0.8025 | 0.9530 | 86 |
| 0.8070 | 0.9270 | 87 |
| 0.7968 | 0.8474 | 88 |
| 0.7880 | 0.9171 | 89 |
| 0.7834 | 0.8668 | 90 |
| 0.7786 | 0.9049 | 91 |
| 0.7595 | 0.9348 | 92 |
| 0.7573 | 0.8826 | 93 |
| 0.7505 | 0.8765 | 94 |
| 0.7474 | 0.9312 | 95 |
| 0.7386 | 0.9211 | 96 |
| 0.7490 | 0.9223 | 97 |
| 0.7214 | 0.8747 | 98 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
eslamxm/mt5-base-finetuned-urdu-arabic | eslamxm | 2022-05-12T09:18:16Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"arabic",
"ar",
"Abstractive Summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-05-12T01:15:19Z | ---
license: apache-2.0
tags:
- summarization
- arabic
- ar
- mt5
- Abstractive Summarization
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: mt5-base-finetuned-urdu-finetuned-urdu-arabic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-finetuned-urdu-finetuned-urdu-arabic
This model is a fine-tuned version of [eslamxm/mt5-base-finetuned-urdu](https://huggingface.co/eslamxm/mt5-base-finetuned-urdu) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3744
- Rouge-1: 22.77
- Rouge-2: 10.15
- Rouge-l: 20.71
- Gen Len: 19.0
- Bertscore: 71.46
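A minimal usage sketch (assumed, not part of the original card) for summarization with this checkpoint; the input placeholder stands in for an Arabic news article.
```python
# Assumed usage example; replace the placeholder with real Arabic text.
from transformers import pipeline

summarizer = pipeline("summarization", model="eslamxm/mt5-base-finetuned-urdu-arabic")
print(summarizer("<Arabic news article text here>", max_length=64, min_length=16))
```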
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 4.5155 | 1.0 | 1172 | 3.6895 | 18.81 | 6.77 | 17.01 | 19.0 | 70.27 |
| 3.8315 | 2.0 | 2344 | 3.5047 | 19.75 | 7.79 | 17.95 | 19.0 | 70.58 |
| 3.6122 | 3.0 | 3516 | 3.4231 | 20.46 | 8.44 | 18.7 | 19.0 | 70.8 |
| 3.4735 | 4.0 | 4688 | 3.3835 | 21.12 | 8.86 | 19.21 | 19.0 | 70.98 |
| 3.3855 | 5.0 | 5860 | 3.3744 | 21.48 | 9.01 | 19.57 | 19.0 | 71.17 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
uhlenbeckmew/distilroberta-base-wiki | uhlenbeckmew | 2022-05-12T07:51:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-05-12T07:02:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-wiki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-wiki
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0961
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4333 | 1.0 | 1223 | 2.1885 |
| 2.3107 | 2.0 | 2446 | 2.1508 |
| 2.2385 | 3.0 | 3669 | 2.0961 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
cocoshe/bert-base-chinese-finetune-5-trash-email | cocoshe | 2022-05-12T07:35:56Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-05-12T07:25:12Z | ---
language: zh
---
# Based on bert-base-chinese
Fine-tuned from bert-base-chinese for 5 epochs on the `message80W` dataset (binary spam-email classification).
```python
# evaluate: collect predictions and labels over the test set
import torch
from tqdm import tqdm

with torch.no_grad():
    model.eval()
    eval_steps = 0
    pred_list = []
    label_list = []
    for i, batch in enumerate(tqdm(test_loader)):
        input_ids, attention_mask, label = batch
        logits = model(input_ids, attention_mask)
        # argmax over the class dimension gives the predicted label per sample
        pred_list += torch.argmax(logits, dim=-1).tolist()
        label_list += label
        eval_steps += 1
```
800k samples, shuffled, split 8:3 into train/eval.
Evaluation results on the eval split are shown below:

|
yogeshchandrasekharuni/t5-small-finetuned-xsum | yogeshchandrasekharuni | 2022-05-12T07:34:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-12T06:56:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 16 | 2.3636 | 60.9559 | 47.1972 | 58.7384 | 59.5004 | 18.082 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
iis2009002/xlm-roberta-base-finetuned-panx-all | iis2009002 | 2022-05-12T07:17:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-04T11:40:11Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1752
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3 | 1.0 | 835 | 0.1862 | 0.8114 |
| 0.1552 | 2.0 | 1670 | 0.1758 | 0.8426 |
| 0.1002 | 3.0 | 2505 | 0.1752 | 0.8557 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sagerpascal/bert-finetuned-ner | sagerpascal | 2022-05-12T07:11:31Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-12T06:30:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9349014411131357
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9423157191752232
- name: Accuracy
type: accuracy
value: 0.9858862659680933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0646
- Precision: 0.9349
- Recall: 0.9498
- F1: 0.9423
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0834 | 1.0 | 1756 | 0.0686 | 0.9140 | 0.9354 | 0.9246 | 0.9825 |
| 0.0421 | 2.0 | 3512 | 0.0596 | 0.9205 | 0.9472 | 0.9336 | 0.9849 |
| 0.0183 | 3.0 | 5268 | 0.0646 | 0.9349 | 0.9498 | 0.9423 | 0.9859 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
iis2009002/xlm-roberta-base-finetuned-panx-it | iis2009002 | 2022-05-12T07:07:41Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-04T11:06:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8247845711940912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2421
- F1: 0.8248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.809 | 1.0 | 70 | 0.3380 | 0.7183 |
| 0.2939 | 2.0 | 140 | 0.2582 | 0.7977 |
| 0.1813 | 3.0 | 210 | 0.2421 | 0.8248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Vnven25/en_pipeline | Vnven25 | 2022-05-12T06:49:36Z | 4 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
]
| token-classification | 2022-05-11T17:14:48Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 1.0
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.3,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
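A minimal usage sketch (assumed, not part of the original card), which presumes the pipeline package has been installed from this repo; the example sentence is illustrative only.
```python
# Assumes the packaged pipeline is installed; example text is made up.
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("Jane Doe from Acme Corp signed the contract at the kickoff event.")
print([(ent.text, ent.label_) for ent in doc.ents])
```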
### Label Scheme
<details>
<summary>View label scheme (6 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `COMPANY NAME`, `CONTRACT`, `EMAIL`, `EVENT`, `MODULE`, `NAME` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 6689.73 |
| `NER_LOSS` | 483.71 | |
guhuawuli/gpt2-poem_key_words | guhuawuli | 2022-05-12T06:28:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-12T01:51:28Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-poem_key_words
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-poem_key_words
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9544 | 1.0 | 670 | 2.6296 |
| 2.7014 | 2.0 | 1340 | 2.5557 |
| 2.6035 | 3.0 | 2010 | 2.5370 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+3fd9dcf
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/ladygaga | huggingtweets | 2022-05-12T06:03:03Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ladygaga/1652335378479/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1519346609125003264/rekKHZUq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lady Gaga</div>
<div style="text-align: center; font-size: 14px;">@ladygaga</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lady Gaga.
| Data | Lady Gaga |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 617 |
| Short tweets | 330 |
| Tweets kept | 2231 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27nvqv2x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ladygaga's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3a6dln4v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3a6dln4v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ladygaga')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vanichandna/indic-bert-finetuned-squad | vanichandna | 2022-05-12T05:16:13Z | 4 | 0 | transformers | [
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-10T20:11:36Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: vanichandna/indic-bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vanichandna/indic-bert-finetuned-squad
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0802
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 21984, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
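The optimizer configuration above can be reconstructed with the `create_optimizer` helper from `transformers`; the snippet below is an assumed reconstruction, not the original training script.
```python
# Assumed reconstruction of the optimizer listed above (TensorFlow).
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-05,
    num_train_steps=21984,  # decay_steps from the config above
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```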
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.8468 | 0 |
| 1.4510 | 1 |
| 1.2435 | 2 |
| 1.0802 | 3 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.0
- Tokenizers 0.12.1
|
eduardopds/mt5-small-finetuned-amazon-en-es | eduardopds | 2022-05-12T01:32:02Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-12T00:39:53Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: eduardopds/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# eduardopds/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0870
- Validation Loss: 3.3925
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.8646 | 4.3778 | 0 |
| 5.9307 | 3.8057 | 1 |
| 5.1494 | 3.6458 | 2 |
| 4.7430 | 3.5501 | 3 |
| 4.4782 | 3.4870 | 4 |
| 4.2922 | 3.4339 | 5 |
| 4.1536 | 3.4037 | 6 |
| 4.0870 | 3.3925 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
tjscollins/ppo-LunarLander-v2-tuned | tjscollins | 2022-05-12T01:11:50Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-12T01:07:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 292.67 +/- 15.30
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
pirchavez/PPO-FirstModel | pirchavez | 2022-05-12T00:28:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-12T00:26:51Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -136.25 +/- 22.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
salil-malhotra/test02-ppo-LunarLander-v2 | salil-malhotra | 2022-05-11T23:04:18Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T03:15:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 280.98 +/- 18.72
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
RaphaelReinauer/LunarLander-v10 | RaphaelReinauer | 2022-05-11T22:37:32Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T22:37:09Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 208.15 +/- 42.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
alk/mt5-small-mt5-small-finetuned-billsum-en-es | alk | 2022-05-11T22:05:52Z | 4 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T18:40:38Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: alk/mt5-small-mt5-small-finetuned-billsum-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alk/mt5-small-mt5-small-finetuned-billsum-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1897
- Validation Loss: 1.0147
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 18944, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.3673 | 1.7982 | 0 |
| 2.2571 | 1.4674 | 1 |
| 1.8047 | 1.2942 | 2 |
| 1.5579 | 1.1585 | 3 |
| 1.3863 | 1.0762 | 4 |
| 1.2786 | 1.0284 | 5 |
| 1.2162 | 1.0217 | 6 |
| 1.1897 | 1.0147 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
huxxx657/roberta-base-finetuned-deletion-squad-15 | huxxx657 | 2022-05-11T21:15:16Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-11T20:04:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-deletion-squad-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-deletion-squad-15
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1127 | 1.0 | 5531 | 1.1057 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
RebeccaJeffers/ppo-LunarLander-v2 | RebeccaJeffers | 2022-05-11T21:06:10Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T21:02:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 231.12 +/- 22.15
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
A2/kogpt2-taf | A2 | 2022-05-11T21:01:45Z | 6 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-04-28T05:45:19Z | ---
license: apache-2.0
---
A project from the 3rd cohort of the Grepp KDT AI program.
Based on the [SKT-AI/KoGPT2](https://github.com/SKT-AI/KoGPT2) model. It was first given additional language-model training on the 2021 news corpus from the Modu Corpus, then fine-tuned on roughly ten thousand editorials from each of the five major Korean dailies (Chosun Ilbo, JoongAng Ilbo, Dong-A Ilbo, Hankyoreh, Kyunghyang Shinmun).
It is further fine-tuned on about a hundred new editorials every day, so it also generates text about current political issues well.
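A minimal usage sketch (assumed, not provided by the author) for generating text with this checkpoint; the Korean prompt is illustrative only.
```python
# Assumed usage example; the prompt means "On recent political issues".
from transformers import pipeline

generator = pipeline("text-generation", model="A2/kogpt2-taf")
print(generator("최근 정치 이슈에 대하여", max_length=128)[0]["generated_text"])
```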
|
chrishistewandb/hugging-face | chrishistewandb | 2022-05-11T19:49:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-06T21:45:03Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hugging-face
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hugging-face
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
RaphaelReinauer/LunarLander-v7 | RaphaelReinauer | 2022-05-11T19:31:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T19:30:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 147.27 +/- 83.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
DBusAI/DQN-MountainCar-v0 | DBusAI | 2022-05-11T18:53:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T18:21:27Z | ---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -100.20 +/- 8.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **DQN** Agent playing **MountainCar-v0**
This is a trained model of a **DQN** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
mcurmei/single_label_N_max_long_training | mcurmei | 2022-05-11T18:10:19Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-11T17:22:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: single_label_N_max_long_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# single_label_N_max_long_training
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8288
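A minimal usage sketch (assumed, not part of the original card) for extractive question answering with this checkpoint; the question and context are illustrative only.
```python
# Assumed usage example; question and context are made up.
from transformers import pipeline

qa = pipeline("question-answering", model="mcurmei/single_label_N_max_long_training")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.",
)
print(result)
```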
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0568 | 1.0 | 674 | 1.9993 |
| 1.6024 | 2.0 | 1348 | 1.8497 |
| 1.0196 | 3.0 | 2022 | 1.9178 |
| 0.7622 | 4.0 | 2696 | 2.0412 |
| 0.6066 | 5.0 | 3370 | 2.2523 |
| 0.4136 | 6.0 | 4044 | 2.3845 |
| 0.3113 | 7.0 | 4718 | 2.5712 |
| 0.2777 | 8.0 | 5392 | 2.6790 |
| 0.208 | 9.0 | 6066 | 2.7464 |
| 0.1749 | 10.0 | 6740 | 2.8288 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ceggian/sbert_pt_reddit_mnr_256 | ceggian | 2022-05-11T18:03:58Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-05-11T17:53:31Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
snowood1/ConfliBERT-scr-cased | snowood1 | 2022-05-11T16:53:30Z | 17 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-04-29T20:52:24Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
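A minimal fill-mask usage sketch (assumed, not from the authors) for this cased checkpoint, assuming the standard BERT `[MASK]` token:
```python
# Assumed usage example; the masked sentence is illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="snowood1/ConfliBERT-scr-cased")
print(fill_mask("Protesters clashed with [MASK] in the capital."))
```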
See more details in https://github.com/eventdata/ConfliBERT/ |
snowood1/ConfliBERT-scr-uncased | snowood1 | 2022-05-11T16:53:17Z | 183 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-04-29T21:00:32Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/ |
snowood1/ConfliBERT-cont-cased | snowood1 | 2022-05-11T16:52:54Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-04-29T20:54:34Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/
|
snowood1/ConfliBERT-cont-uncased | snowood1 | 2022-05-11T16:49:05Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-04-29T21:01:06Z | ---
license: gpl-3.0
---
ConfliBERT is a pre-trained language model for political conflict and violence.
We provided four versions of ConfliBERT:
<ol>
<li>ConfliBERT-scr-uncased: Pretraining from scratch with our own uncased vocabulary (preferred)</li>
<li>ConfliBERT-scr-cased: Pretraining from scratch with our own cased vocabulary</li>
<li>ConfliBERT-cont-uncased: Continual pretraining with original BERT's uncased vocabulary</li>
<li>ConfliBERT-cont-cased: Continual pretraining with original BERT's cased vocabulary</li>
</ol>
See more details in https://github.com/eventdata/ConfliBERT/ |
kaeldric/TEST2ppo-LunarLander-v2 | kaeldric | 2022-05-11T16:48:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T16:48:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 246.63 +/- 20.18
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
subhasisj/hi-TAPT-MLM-MiniLM | subhasisj | 2022-05-11T16:44:42Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-05-11T13:30:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: hi-TAPT-MLM-MiniLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hi-TAPT-MLM-MiniLM
This model is a fine-tuned version of [subhasisj/MiniLMv2-qa-encoder](https://huggingface.co/subhasisj/MiniLMv2-qa-encoder) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
patrickvonplaten/opt_metaseq_350m | patrickvonplaten | 2022-05-11T16:08:26Z | 8 | 0 | transformers | [
"transformers",
"opt",
"feature-extraction",
"opt_metasq",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-05-11T08:35:10Z | ---
tags:
- opt_metasq
---
# This repo lets you run the following checkpoint using facebookresearch/metaseq.
Do the following:
## 1. Install PyTorch
```
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
## 2. Install Megatron
```
git clone https://github.com/patrickvonplaten/Megatron-LM.git
cd Megatron-LM
pip3 install six regex
pip3 install -e .
```
## 3. Install fairscale
```
git clone https://github.com/facebookresearch/fairscale.git
cd fairscale
git checkout prefetch_fsdp_params_simple
pip3 install -e .
```
## 4. Install metaseq
```
git clone https://github.com/patrickvonplaten/metaseq.git
cd metaseq
pip3 install -e .
```
## 5. Clone this repo (click top right on "How to clone")
## 6. Run the following:
```bash
cd <path/to/cloned/repo>
bash run.sh
``` |
KenP/codeparrot-ds | KenP | 2022-05-11T15:04:32Z | 4 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-10T20:46:24Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: KenP/codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KenP/codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.3900
- Validation Loss: 9.6171
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -922, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.3900 | 9.6171 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.0
- Tokenizers 0.12.1
|
huggingtweets/alice_lbl-lotrbookquotes | huggingtweets | 2022-05-11T14:44:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-11T14:43:07Z | ---
language: en
thumbnail: http://www.huggingtweets.com/alice_lbl-lotrbookquotes/1652280261416/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1424546909104926720/g4pTa5BS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1047569624693465089/0yKYd-Xl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alice in Wonderland & Looking-Glass (line by line) & Lord of the Rings quotes</div>
<div style="text-align: center; font-size: 14px;">@alice_lbl-lotrbookquotes</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alice in Wonderland & Looking-Glass (line by line) & Lord of the Rings quotes.
| Data | Alice in Wonderland & Looking-Glass (line by line) | Lord of the Rings quotes |
| --- | --- | --- |
| Tweets downloaded | 3050 | 3250 |
| Retweets | 0 | 0 |
| Short tweets | 38 | 0 |
| Tweets kept | 3012 | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/14brvkjr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alice_lbl-lotrbookquotes's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/tzmzyo79) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/tzmzyo79/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alice_lbl-lotrbookquotes')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DBusAI/ppo-FrozenLake-v1 | DBusAI | 2022-05-11T14:19:43Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T14:19:20Z | ---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 0.80 +/- 0.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
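A minimal loading and evaluation sketch (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="DBusAI/ppo-FrozenLake-v1", filename="ppo-FrozenLake-v1.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over 10 episodes
env = gym.make("FrozenLake-v1")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```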
|
pere/t5-parliament-categorisation | pere | 2022-05-11T14:14:10Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-04-04T14:46:19Z | ---
license: apache-2.0
---
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv | theojolliffe | 2022-05-11T13:55:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T07:49:27Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8065
- Rouge1: 54.5916
- Rouge2: 36.7817
- Rougel: 40.4708
- Rougelsum: 52.5754
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2945 | 1.0 | 795 | 0.9555 | 51.91 | 32.0926 | 33.6727 | 49.5306 | 142.0 |
| 0.7153 | 2.0 | 1590 | 0.8317 | 52.4708 | 34.1035 | 35.2968 | 50.2966 | 141.963 |
| 0.5398 | 3.0 | 2385 | 0.8133 | 52.4603 | 33.497 | 36.4227 | 50.2513 | 141.8704 |
| 0.3568 | 4.0 | 3180 | 0.8091 | 52.3993 | 34.2424 | 37.7819 | 50.2069 | 142.0 |
| 0.2842 | 5.0 | 3975 | 0.8065 | 54.5916 | 36.7817 | 40.4708 | 52.5754 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
orenpereg/paraphrase-mpnet-base-v2_sst2_64samps | orenpereg | 2022-05-11T13:40:33Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-05-11T13:40:24Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# orenpereg/paraphrase-mpnet-base-v2_sst2_64samps
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('orenpereg/paraphrase-mpnet-base-v2_sst2_64samps')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('orenpereg/paraphrase-mpnet-base-v2_sst2_64samps')
model = AutoModel.from_pretrained('orenpereg/paraphrase-mpnet-base-v2_sst2_64samps')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=orenpereg/paraphrase-mpnet-base-v2_sst2_64samps)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ceggian/sbert_pt_reddit_mnr_512 | ceggian | 2022-05-11T13:33:48Z | 1 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-05-11T13:18:47Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 39289 with parameters:
```
{'batch_size': 8}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3928,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
orenpereg/paraphrase-mpnet-base-v2_sst2_4samps | orenpereg | 2022-05-11T13:32:25Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-05-11T13:32:16Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# orenpereg/paraphrase-mpnet-base-v2_sst2_4samps
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('orenpereg/paraphrase-mpnet-base-v2_sst2_4samps')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('orenpereg/paraphrase-mpnet-base-v2_sst2_4samps')
model = AutoModel.from_pretrained('orenpereg/paraphrase-mpnet-base-v2_sst2_4samps')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=orenpereg/paraphrase-mpnet-base-v2_sst2_4samps)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
redshift51/ab_LunarLander-v2_1 | redshift51 | 2022-05-11T13:20:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T13:19:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 25.46 +/- 125.09
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
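A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="redshift51/ab_LunarLander-v2_1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```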
|
huggingartists/snoop-dogg | huggingartists | 2022-05-11T12:30:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/snoop-dogg",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
datasets:
- huggingartists/snoop-dogg
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/91bd22f5e53a3ea3cb1436de8f4a3722.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Snoop Dogg</div>
<a href="https://genius.com/artists/snoop-dogg">
<div style="text-align: center; font-size: 14px;">@snoop-dogg</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Snoop Dogg.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/snoop-dogg).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/snoop-dogg")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/xru6xdjl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Snoop Dogg's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1o72aoie) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1o72aoie/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/snoop-dogg')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/snoop-dogg")
model = AutoModelWithLMHead.from_pretrained("huggingartists/snoop-dogg")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
liujr1980/mmodels | liujr1980 | 2022-05-11T12:14:52Z | 4 | 0 | transformers | [
"transformers",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-08T03:01:46Z | ## My first model
Fine-tuned from DistilBERT.
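A minimal usage sketch with the text-classification pipeline (the label set and intended inputs are not documented in this card, so the example is illustrative only):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="liujr1980/mmodels")
print(classifier("This is a test sentence."))
```
|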
wvangils/DistilGPT2-Beatles-Lyrics-finetuned | wvangils | 2022-05-11T11:44:35Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-11T09:51:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilGPT2-Beatles-Lyrics-finetuned
results: []
widget:
- text: "Last night in Kiev the"
example_title: "Kiev"
- text: "It hasn't rained in weeks"
example_title: "Rain"
---
# DistilGPT2-Beatles-Lyrics-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [Huggingartists - beatles](https://huggingface.co/datasets/huggingartists/the-beatles) dataset. It will complete an input prompt with Beatles-like text.
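A quick way to try it is the text-generation pipeline; a minimal sketch using one of the widget prompts above (generation arguments are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="wvangils/DistilGPT2-Beatles-Lyrics-finetuned")
print(generator("Last night in Kiev the", max_length=50, num_return_sequences=1)[0]["generated_text"])
```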
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.748 | 1.0 | 165 | 2.3732 |
| 2.4395 | 2.0 | 330 | 2.1938 |
| 2.2968 | 3.0 | 495 | 2.1118 |
| 2.2075 | 4.0 | 660 | 2.0721 |
| 2.1393 | 5.0 | 825 | 2.0571 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
Wanjiru/ag_based_ner | Wanjiru | 2022-05-11T11:41:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-11T07:51:14Z | Fine-tuned recobo/agriculture-bert-uncased for custom NER entities.
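A minimal usage sketch (the entity label set is not documented in this card, and the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="Wanjiru/ag_based_ner", aggregation_strategy="simple")
print(ner("Maize yields improved after the farmer applied nitrogen fertiliser."))
```
|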
crow/ppo-LunarLander-v2 | crow | 2022-05-11T11:15:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T11:12:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 222.50 +/- 86.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
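A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="crow/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```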
|
ankkarp/ppo-LunarLander-v2_v2 | ankkarp | 2022-05-11T10:15:12Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T10:14:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 236.25 +/- 8.86
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
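A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="ankkarp/ppo-LunarLander-v2_v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```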
|
meedan/paraphrase-filipino-mpnet-base-v2 | meedan | 2022-05-11T09:50:47Z | 76 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-04-04T18:06:35Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# paraphrase-filipino-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This model was trained using the student--teacher approach outlined in [Reimers and Gurevych (2020)](https://aclanthology.org/2020.emnlp-main.365/).
The teacher model was [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2), and the student model was [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2), which is based on XLM-R. We trained the model for 2 epochs with a batch size of 64 on parallel English--Tagalog and English--Filipino data from OPUS. We found the data to be of variable quality and filtered it to include only sentence pairs that the Compact Language Detection kit (CLDv3) identified reliably as being in Tagalog or Filipino. Other parameters were left unchanged from the example [make_multilingual_sys.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/multilingual/make_multilingual_sys.py) code in the sentence-transformers code base.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
from scipy.spatial import distance
import itertools
model = SentenceTransformer('meedan/paraphrase-filipino-mpnet-base-v2')
sentences = ["saan pong mga lugar available ang pfizer vaccine? Thank you!","Ask ko lang po saan meron available na vaccine","Where is the vaccine available?"]
embeddings = model.encode(sentences)
dist=[distance.cosine(i,j) for i,j in itertools.combinations(embeddings,2)]
print(dist)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('meedan/paraphrase-filipino-mpnet-base-v2')
model = AutoModel.from_pretrained('meedan/paraphrase-filipino-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
We machine translated the STS data from [SentEval](https://github.com/facebookresearch/SentEval) to Filipino using the Google Translation API and used this for evaluation alongside the original English-language STS data. We used Spearman's rank correlation coefficient. We found roughly the same performance as the original base model (sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on English while substantial gains were made for Filipino. For English, the average correlation is 0.80. For Filipino, it is 0.75.
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=meedan/paraphrase-filipino-mpnet-base-v2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 79097 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
wesleywt/ppo-LunarLander-v2 | wesleywt | 2022-05-11T09:39:42Z | 9 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T07:26:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 291.52 +/- 22.96
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
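A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="wesleywt/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```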
|
LIA-AvignonUniversity/IWSLT2022-Niger-Mali | LIA-AvignonUniversity | 2022-05-11T09:31:51Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"arxiv:2201.05051",
"endpoints_compatible",
"region:us"
]
| null | 2022-04-04T16:13:17Z | ## Model and data descriptions
This is a wav2vec 2.0 base model trained on the Niger-Mali audio collection and on the Tamasheq-French speech corpus. Combined, these contain 111 hours of French, 109 hours of Fulfulde, 100 hours of Hausa, 243 hours of Tamasheq and 95 hours of Zarma.
These corpora were presented in [Boito et al., 2022](https://arxiv.org/abs/2201.05051).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations.
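A minimal loading sketch with transformers (assuming the standard wav2vec2 classes apply to this pretraining checkpoint; task-specific heads must be added and fine-tuned separately):

```python
from transformers import Wav2Vec2Model

# Load the pretrained encoder for downstream fine-tuning or feature extraction
model = Wav2Vec2Model.from_pretrained("LIA-AvignonUniversity/IWSLT2022-Niger-Mali")
```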
## Referencing our IWSLT models
```
@article{boito2022trac,
title={ON-TRAC Consortium Systems for the IWSLT 2022 Dialect and Low-resource Speech Translation Tasks},
author={Boito, Marcely Zanon and Ortega, John and Riguidel, Hugo and Laurent, Antoine and Barrault, Lo{\"\i}c and Bougares, Fethi and Chaabani, Firas and Nguyen, Ha and Barbier, Florentin and Gahbiche, Souhir and others},
journal={IWSLT},
year={2022}
}
``` |
prashanth/mbart-large-cc25-finetuned-hi-to-en | prashanth | 2022-05-11T08:57:01Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:hindi_english_machine_translation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-08T12:48:08Z | ---
tags:
- generated_from_trainer
datasets:
- hindi_english_machine_translation
model-index:
- name: mbart-large-cc25-finetuned-hi-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-large-cc25-finetuned-hi-to-en
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the hindi_english_machine_translation dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 1.18.0
- Tokenizers 0.12.1
|
fxmarty/donotdelete | fxmarty | 2022-05-11T08:51:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-05-11T08:51:40Z | Fixed parameters:
* **model_name_or_path**: `Bhumika/roberta-base-finetuned-sst2`
* **dataset**:
* **path**: `glue`
* **name**: `sst2`
* **calibration_split**: `None`
* **eval_split**: `validation`
* **data_keys**: `['sentence']`
* **label_keys**: `['label']`
* **quantization_approach**: `dynamic`
* **node_exclusion**: `[]`
* **per_channel**: `False`
* **calibration**: `None`
* **framework**: `onnxruntime`
* **framework_args**:
* **opset**: `15`
* **optimization_level**: `1`
* **aware_training**: `False`
Benchmarked parameters:
* **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`
## Evaluation
Below, time metrics for
* Batch size: 8
* Input length: 128
| operators_to_quantize | latency_mean (original, ms) | latency_mean (optimized, ms) | throughput (original, /s) | throughput (optimized, /s) | accuracy (original) | accuracy (optimized) |
| :-------------------: | :-------------------------: | :--------------------------: | :-----------------------: | :------------------------: | :-----------------: | :------------------: |
| `['Add']` | 454.70 | 361.81 | 2.50 | 3.00 | 1.0 | 1.0 |
| `['Add', 'MatMul']` | 474.54 | 135.14 | 2.50 | 7.50 | 1.0 | 1.0 |
|
GuillaumeSalouHF/slime-test | GuillaumeSalouHF | 2022-05-11T08:21:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-04-28T08:20:08Z |
Site Reliability Engineering
---
language: en
thumbnail: http://www.huggingtweets.com/slime_machine/1640253262516/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468034520326701062/LDp_yytu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rich homie cron</div>
<div style="text-align: center; font-size: 14px;">@slime_machine</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rich homie cron.
| Data | rich homie cron |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 590 |
| Short tweets | 494 |
| Tweets kept | 2150 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28uf2bgx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @slime_machine's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3h5ua6ik) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3h5ua6ik/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/slime_machine')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
IljaSamoilov/EstBERT-estonian-subtitles-token-classification | IljaSamoilov | 2022-05-11T08:13:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"et",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-05-10T18:53:58Z | ---
language:
- et
widget:
- text: "Et, et, et miks mitte olla siis tasakaalus, ma noh, hüpoteetiliselt viskan selle palli üles,"
- text: "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud,"
---
Importing the model and tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("IljaSamoilov/EstBERT-estonian-subtitles-token-classification")
model = AutoModelForTokenClassification.from_pretrained("IljaSamoilov/EstBERT-estonian-subtitles-token-classification")
```
IljaSamoilov/MBART-estonian-subtitles | IljaSamoilov | 2022-05-11T08:12:33Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"et",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T18:17:53Z | ---
language:
- et
widget:
- text: "te olete ka noh, noh, päris korralikult ka Rahvusringhäälingu teatud mõttes sellisesse keerulisse olukorda pannud,"
- text: "Et, et, et miks mitte olla siis tasakaalus, ma noh, hüpoteetiliselt viskan selle palli üles,"
---
Model usage:
```python
from transformers import MBart50Tokenizer, MBartForConditionalGeneration

tokenizer = MBart50Tokenizer.from_pretrained("IljaSamoilov/MBART-estonian-subtitles", src_lang="et_EE", tgt_lang="et_EE")
model = MBartForConditionalGeneration.from_pretrained("IljaSamoilov/MBART-estonian-subtitles")
```
cocoshe/gpt2-chinese-gen-ads-by-keywords | cocoshe | 2022-05-11T08:08:23Z | 7 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-11T02:09:28Z | ---
license: apache-2.0
---
[Qianyan - AdvertiseGen ad copywriting generation dataset](https://www.luge.ai/#/luge/dataDetail?id=9)
> Only .bin (PyTorch) weights are supported
Fine-tuned on this Qianyan dataset for 5 epochs:
```python
input_text = '类型#裙*材质#针织*风格#简约*风格#青春*风格#清新*风格#性感*图案#条纹*图案#撞色*裙下摆#开叉*裙长#连衣裙*裙款式#拼接*裙款式#吊带'
output_text = gen_ads(input_text)
output_text = output_text.replace(' ', '')
output_text = output_text[len(input_text):]
output_text
```
Output (in practice, take care to control max_length):
```python
output_text='夏天穿的针织衫,搭配简约上衣+牛仔裙,一下子就活泼起来了好吧,就这么简约的蓝色衬托出女性优雅的气质,搭出一派优雅女人味,让人印象深刻哦~好了,今天是秋天来了,天气凉了,是不是该穿上针织呢,秋天会是一个充满阳光的日子呢?让我们一起去看看今天的穿搭吧!首先是白色风衣,其次是棉质风衣。在秋天我们应该穿丝缎或者花边,这种比较清新的风格一定不会让人觉得很成熟,而且又是简约款式,显得自然、有气质。再就是皮草风衣啦,一件白皮草+一件牛仔+两件棉纱的搭配就很潮'
```
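Note that `gen_ads` is not defined in the card. A minimal sketch of such a helper, assuming the common gpt2-chinese setup (BERT-style tokenizer with a GPT-2 LM head); all generation arguments are illustrative:

```python
from transformers import BertTokenizer, GPT2LMHeadModel

tokenizer = BertTokenizer.from_pretrained("cocoshe/gpt2-chinese-gen-ads-by-keywords")
model = GPT2LMHeadModel.from_pretrained("cocoshe/gpt2-chinese-gen-ads-by-keywords")

def gen_ads(text, max_length=256):
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=max_length, do_sample=True, top_p=0.9)
    # BERT-style decoding inserts spaces between characters, hence the replace(' ', '') above
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```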
|
huggingtweets/elonmusk-kimkardashian | huggingtweets | 2022-05-11T07:03:54Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-05-11T07:03:46Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521957986335297536/itVSA7l0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446623190252343301/qIJAwo9I_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Kim Kardashian</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-kimkardashian</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Kim Kardashian.
| Data | Elon Musk | Kim Kardashian |
| --- | --- | --- |
| Tweets downloaded | 222 | 3241 |
| Retweets | 16 | 715 |
| Short tweets | 47 | 667 |
| Tweets kept | 159 | 1859 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/17bd0o7t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-kimkardashian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2g9hft2n) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2g9hft2n/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-kimkardashian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ceggian/sbert_standard_reddit_softmax | ceggian | 2022-05-11T06:49:38Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-05-11T06:34:19Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 117759 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11775,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
mcurmei/unique_N_max | mcurmei | 2022-05-11T06:19:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-11T05:54:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: unique_N_max
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unique_N_max
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0901 | 1.0 | 1162 | 1.8326 |
| 1.5479 | 2.0 | 2324 | 1.7201 |
| 1.2903 | 3.0 | 3486 | 1.7409 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
|
SalamaThanks/SalamaThanksTransformer_fil2en_v2 | SalamaThanks | 2022-05-11T05:57:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T05:42:28Z | ---
license: afl-3.0
---
SalamaThanks Transformer for Filipino-to-English Text Translation version 2.
A fine-tuned model based on the Helsinki-NLP/opus-mt-en-tl transformer model.
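A minimal translation sketch (the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SalamaThanks/SalamaThanksTransformer_fil2en_v2")
model = AutoModelForSeq2SeqLM.from_pretrained("SalamaThanks/SalamaThanksTransformer_fil2en_v2")

inputs = tokenizer("Magandang umaga sa inyong lahat.", return_tensors="pt")
output_ids = model.generate(**inputs)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```
|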
SalamaThanks/SalamaThanksTransformer_fil2en_v1 | SalamaThanks | 2022-05-11T05:45:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T05:34:45Z | ---
license: afl-3.0
---
SalamaThanks Transformer for Filipino-to-English Text Translation version 1.
Based on the Helsinki-NLP/opus-mt-tl-en transformer model. |
SalamaThanks/SalamaThanksTransformer_en2fil_v1 | SalamaThanks | 2022-05-11T05:45:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-11T05:31:27Z | ---
license: afl-3.0
---
SalamaThanks Transformer for English-to-Filipino Text Translation version 1.
Based on the Helsinki-NLP/opus-mt-en-tl transformer model.
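A minimal translation sketch (the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("SalamaThanks/SalamaThanksTransformer_en2fil_v1")
model = AutoModelForSeq2SeqLM.from_pretrained("SalamaThanks/SalamaThanksTransformer_en2fil_v1")

inputs = tokenizer("Good morning, everyone.", return_tensors="pt")
output_ids = model.generate(**inputs)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```
|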
fatPegasus23/TesLunarLander-v2 | fatPegasus23 | 2022-05-11T05:09:29Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T04:55:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 173.71 +/- 111.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
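A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="fatPegasus23/TesLunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```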
|
bbhaskar8/PPO-LunarLander-v2 | bbhaskar8 | 2022-05-11T04:32:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T04:31:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 215.32 +/- 46.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
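A minimal sketch for loading and evaluating the agent (the checkpoint filename is an assumption; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
import gym

# Download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(repo_id="bbhaskar8/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```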
|
junnyu/roformer_v2_chinese_char_large | junnyu | 2022-05-11T03:32:38Z | 5 | 3 | transformers | [
"transformers",
"pytorch",
"roformer",
"fill-mask",
"roformer-v2",
"tf2.0",
"zh",
"arxiv:2104.09864",
"autotrain_compatible",
"region:us"
]
| fill-mask | 2022-03-21T13:51:14Z | ---
language: zh
tags:
- roformer-v2
- pytorch
- tf2.0
inference: False
---
## Introduction
### TF version
https://github.com/ZhuiyiTechnology/roformer-v2
### PyTorch + TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## Evaluation comparison
### CLUE-dev leaderboard classification results (base and large versions)
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| BERT | 60.06 | 56.80 | 72.41 | 79.56 | 73.93 | 78.62 | 83.93 |
| RoBERTa | 60.64 | 58.06 | 74.05 | 81.24 | 76.00 | 87.50 | 84.50 |
| RoFormer | 60.91 | 57.54 | 73.52 | 80.92 | 76.07 | 86.84 | 84.63 |
| RoFormerV2<sup>*</sup> | 60.87 | 56.54 | 72.75 | 80.34 | 75.36 | 80.92 | 84.67 |
| GAU-α | 61.41 | 57.76 | 74.17 | 81.82 | 75.86 | 79.93 | 85.67 |
| RoFormer-pytorch (this repo's code) | 60.60 | 57.51 | 74.44 | 80.79 | 75.67 | 86.84 | 84.77 |
| RoFormerV2-pytorch (this repo's code) | **62.87** | 59.03 | **76.20** | 80.85 | 79.73 | 87.82 | **91.87** |
| GAU-α-pytorch(Adafactor) | 61.18 | 57.52 | 73.42 | 80.91 | 75.69 | 80.59 | 85.5 |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.68 | 57.95 | 73.08 | 81.02 | 75.36 | 81.25 | 83.93 |
| RoFormerV2-large-pytorch (this repo's code) | 61.75 | **59.21** | 76.14 | 82.35 | **81.73** | **91.45** | 91.5 |
| Chinesebert-large-pytorch | 61.25 | 58.67 | 74.70 | **82.65** | 79.63 | 87.83 | 84.97 |
### CLUE-1.0-test leaderboard classification results (base and large versions)
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| RoFormer-pytorch (this repo's code) | 59.54 | 57.34 | 74.46 | 80.23 | 73.67 | 80.69 | 84.57 |
| RoFormerV2-pytorch (this repo's code) | **63.15** | 58.24 | 75.42 | 80.59 | 74.17 | 83.79 | 83.73 |
| GAU-α-pytorch(Adafactor) | 61.38 | 57.08 | 74.05 | 80.37 | 73.53 | 74.83 | **85.6** |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.54 | 57.67 | 72.44 | 80.32 | 72.97 | 76.55 | 84.13 |
| RoFormerV2-large-pytorch (this repo's code) | 61.85 | **59.13** | **76.38** | 80.97 | 76.23 | **85.86** | 84.33 |
| Chinesebert-large-pytorch | 61.54 | 58.57 | 74.8 | **81.94** | **76.93** | 79.66 | 85.1 |
### Notes:
- RoFormerV2<sup>*</sup> denotes the RoFormerV2 model trained without multi-task learning; that checkpoint has not been open-sourced by Jianlin Su (thanks to him for the clarification).
- Results without the pytorch suffix are copied from the [GAU-alpha](https://github.com/ZhuiyiTechnology/GAU-alpha) repository.
- Results with the pytorch suffix come from my own training runs.
- Jianlin Su's code classifies directly on top of the CLS token, whereas this repository uses the classification head below, which adds two dropout layers, one dense layer, and one ReLU activation.
```python
import torch.nn as nn
from transformers.activations import ACT2FN  # activation-name -> function map (assumed import; the repo may ship its own)


class RoFormerClassificationHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
        self.config = config

    def forward(self, features, **kwargs):
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.dense(x)
        x = ACT2FN[self.config.hidden_act](x)  # ReLU here
        x = self.dropout(x)
        x = self.out_proj(x)
        return x
```
### Installation
- pip install roformer==0.4.3
## PyTorch & TF 2.0 usage
```python
import torch
import tensorflow as tf
from transformers import BertTokenizer
from roformer import RoFormerForMaskedLM, TFRoFormerForMaskedLM
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_v2_chinese_char_large")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_v2_chinese_char_large")
tf_model = TFRoFormerForMaskedLM.from_pretrained(
    "junnyu/roformer_v2_chinese_char_large", from_pt=True
)
pt_inputs = tokenizer(text, return_tensors="pt")
tf_inputs = tokenizer(text, return_tensors="tf")
# pytorch
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
pt_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
)
print(pt_outputs_sentence)
# tf
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
tokens = tokenizer.convert_ids_to_tokens(tf.math.top_k(tf_outputs[i], k=5)[1])
tf_outputs_sentence += "[" + "||".join(tokens) + "]"
else:
tf_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
)
print(tf_outputs_sentence)
# small
# pytorch: 今天[的||,||是||很||也]很好,我[要||会||是||想||在]去公园玩。
# tf: 今天[的||,||是||很||也]很好,我[要||会||是||想||在]去公园玩。
# base
# pytorch: 今天[我||天||晴||园||玩]很好,我[想||要||会||就||带]去公园玩。
# tf: 今天[我||天||晴||园||玩]很好,我[想||要||会||就||带]去公园玩。
# large
# pytorch: 今天[天||气||我||空||阳]很好,我[又||想||会||就||爱]去公园玩。
# tf: 今天[天||气||我||空||阳]很好,我[又||想||会||就||爱]去公园玩。
```
## Citation
Bibtex:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```tex
@techreport{roformerv2,
title={RoFormerV2: A Faster and Better RoFormer - ZhuiyiAI},
  author={Jianlin Su and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2022},
url="https://github.com/ZhuiyiTechnology/roformer-v2",
}
``` |
junnyu/chinese_GAU-alpha-char_L-24_H-768 | junnyu | 2022-05-11T03:29:46Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gau_alpha",
"fill-mask",
"gau alpha",
"torch",
"zh",
"autotrain_compatible",
"region:us"
]
| fill-mask | 2022-04-22T08:03:14Z | ---
language: zh
tags:
- gau alpha
- torch
inference: False
---
# PyTorch code
https://github.com/JunnYu/GAU-alpha-pytorch
# bert4keras code
https://github.com/ZhuiyiTechnology/GAU-alpha
# Install
```bash
pip install git+https://github.com/JunnYu/GAU-alpha-pytorch.git
or
pip install gau_alpha
```
## Evaluation comparison
### CLUE-dev leaderboard classification results (base version)
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| BERT | 60.06 | 56.80 | 72.41 | 79.56 | 73.93 | 78.62 | 83.93 |
| RoBERTa | 60.64 | 58.06 | 74.05 | **81.24** | 76.00 | 87.50 | 84.50 |
| RoFormer | 60.91 | 57.54 | 73.52 | 80.92 | 76.07 | 86.84 | 84.63 |
| RoFormerV2<sup>*</sup> | 60.87 | 56.54 | 72.75 | 80.34 | 75.36 | 80.92 | 84.67 |
| GAU-α | 61.41 | 57.76 | 74.17 | 81.82 | 75.86 | 79.93 | 85.67 |
| RoFormerV2-pytorch| **62.87** | **59.03** | **76.20** | 80.85 | **79.73** | **87.82** | **91.87** |
| GAU-α-pytorch(Adafactor) | 61.18 | 57.52 | 73.42 | 80.91 | 75.69 | 80.59 | 85.5 |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.68 | 57.95 | 73.08 | 81.02 | 75.36 | 81.25 | 83.93 |
### CLUE-test leaderboard classification results (base version)
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| RoFormerV2-pytorch | **63.15** | **58.24** | **75.42** | **80.59** | **74.17** | **83.79** | 83.73 |
| GAU-α-pytorch(Adafactor) | 61.38 | 57.08 | 74.05 | 80.37 | 73.53 | 74.83 | **85.6** |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.54 | 57.67 | 72.44 | 80.32 | 72.97 | 76.55 | 84.13 |
### CLUE-dev reading comprehension and NER results
| | cmrc2018 | c3 | chid | cluener |
| :-----: | :-----: | :---: | :---: | :---: |
| BERT | 56.17 | 60.54 | 85.69 | 79.45 |
| RoBERTa | 56.54 | 67.66 | 86.71 | 79.47 |
| RoFormer | 56.26 | 67.24 | 86.57 | 79.72 |
| RoFormerV2<sup>*</sup> | 57.91 | 64.62 | 85.09 | **81.08** |
| GAU-α | **58.09** | **68.24** | **87.91** | 80.01 |
### Notes:
- RoFormerV2<sup>*</sup> denotes the RoFormerV2 model trained without multi-task learning; that checkpoint has not been open-sourced by Jianlin Su (thanks to him for the clarification).
- Results without the pytorch suffix are copied from the [GAU-alpha](https://github.com/ZhuiyiTechnology/GAU-alpha) repository.
- Results with the pytorch suffix come from my own training runs.
# Usage
```python
import torch
from gau_alpha import GAUAlphaForMaskedLM, GAUAlphaTokenizer
text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = GAUAlphaTokenizer.from_pretrained(
"junnyu/chinese_GAU-alpha-char_L-24_H-768"
)
pt_model = GAUAlphaForMaskedLM.from_pretrained(
"junnyu/chinese_GAU-alpha-char_L-24_H-768"
)
pt_inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
if id == tokenizer.mask_token_id:
val, idx = pt_outputs[i].softmax(-1).topk(k=5)
tokens = tokenizer.convert_ids_to_tokens(idx)
new_tokens = []
for v, t in zip(val.cpu(), tokens):
new_tokens.append(f"{t}+{round(v.item(),4)}")
pt_outputs_sentence += "[" + "||".join(new_tokens) + "]"
else:
pt_outputs_sentence += "".join(
tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
)
print(pt_outputs_sentence)
# pytorch: 今天[天+0.8657||气+0.0535||阳+0.0165||,+0.0126||晴+0.0111]很好,我[要+0.4619||想+0.4352||又+0.0252||就+0.0157||跑+0.0064]去公园玩。
```
# Reference
Bibtex:
```tex
@techreport{gau-alpha,
title={GAU-α: GAU-based Transformers for NLP - ZhuiyiAI},
author={Jianlin Su, Shengfeng Pan, Bo Wen, Yunfeng Liu},
year={2022},
url="https://github.com/ZhuiyiTechnology/GAU-alpha",
}
```
|
jonporterjones/TEST2ppo-LunarLander-v2 | jonporterjones | 2022-05-11T03:10:57Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T02:51:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 105.84 +/- 83.18
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_6 | husnu | 2022-05-11T01:56:24Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-05-10T17:38:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_6
This model is a fine-tuned version of [husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5](https://huggingface.co/husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3646
- Wer: 0.3478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1024 | 0.51 | 400 | 0.4030 | 0.4171 |
| 0.1533 | 1.02 | 800 | 0.4733 | 0.4570 |
| 0.1584 | 1.53 | 1200 | 0.4150 | 0.4371 |
| 0.1538 | 2.04 | 1600 | 0.4104 | 0.4390 |
| 0.1395 | 2.55 | 2000 | 0.3891 | 0.4133 |
| 0.1415 | 3.07 | 2400 | 0.3877 | 0.4015 |
| 0.1261 | 3.58 | 2800 | 0.3685 | 0.3899 |
| 0.1149 | 4.09 | 3200 | 0.3791 | 0.3881 |
| 0.1003 | 4.6 | 3600 | 0.3642 | 0.3626 |
| 0.0934 | 5.11 | 4000 | 0.3755 | 0.3516 |
| 0.0805 | 5.62 | 4400 | 0.3646 | 0.3478 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
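## How to use
A minimal transcription sketch with the `transformers` speech-recognition pipeline; the audio path below is a placeholder and the input is expected to be 16 kHz mono Turkish speech.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_6",
)

# "sample_tr.wav" is a placeholder path to a 16 kHz Turkish recording.
print(asr("sample_tr.wav")["text"])
```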
|
koala978/PPO-LunarLander-v2 | koala978 | 2022-05-11T01:25:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-11T01:24:41Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 249.06 +/- 18.91
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huxxx657/roberta-base-finetuned-scrambled-squad-10-new | huxxx657 | 2022-05-11T00:56:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-10T22:49:53Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-scrambled-squad-10-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-scrambled-squad-10-new
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9984 | 1.0 | 5536 | 0.9721 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.0
- Tokenizers 0.12.1
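## How to use
A minimal extractive-QA sketch with the `transformers` pipeline; the question/context pair is illustrative only.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="huxxx657/roberta-base-finetuned-scrambled-squad-10-new",
)

# Illustrative inputs; any SQuAD-style question/context pair works.
result = qa(
    question="What architecture is the model based on?",
    context="The checkpoint is a fine-tuned version of roberta-base trained on a scrambled SQuAD set.",
)
print(result["answer"], round(result["score"], 3))
```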
|
chris-kehl/TEST2ppo-LunarLander-v2 | chris-kehl | 2022-05-11T00:41:54Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-08T01:21:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 284.84 +/- 20.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
madatnlp/mt5-kormath | madatnlp | 2022-05-11T00:26:19Z | 4 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T22:55:04Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: madatnlp/mt5-kormath
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# madatnlp/mt5-kormath
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7119
- Validation Loss: 1.1299
- Epoch: 61
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_bfloat16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 17.9929 | 5.9287 | 0 |
| 5.4802 | 3.9942 | 1 |
| 4.1718 | 3.2517 | 2 |
| 3.5750 | 2.9586 | 3 |
| 3.1535 | 2.4970 | 4 |
| 2.8665 | 2.4626 | 5 |
| 2.6682 | 2.3795 | 6 |
| 2.5323 | 2.2238 | 7 |
| 2.4057 | 2.0684 | 8 |
| 2.3107 | 2.2033 | 9 |
| 2.2501 | 1.8339 | 10 |
| 2.1089 | 1.9064 | 11 |
| 2.0741 | 2.0256 | 12 |
| 1.9868 | 1.8107 | 13 |
| 1.9719 | 1.7157 | 14 |
| 1.8762 | 1.6966 | 15 |
| 1.8814 | 1.6580 | 16 |
| 1.8052 | 1.6043 | 17 |
| 1.7567 | 1.6572 | 18 |
| 1.7209 | 1.5485 | 19 |
| 1.7347 | 1.6464 | 20 |
| 1.6760 | 1.5892 | 21 |
| 1.6286 | 1.5765 | 22 |
| 1.6124 | 1.7408 | 23 |
| 1.5683 | 1.4875 | 24 |
| 1.5814 | 1.4448 | 25 |
| 1.5306 | 1.4902 | 26 |
| 1.5121 | 1.5133 | 27 |
| 1.4869 | 1.4217 | 28 |
| 1.4539 | 1.5602 | 29 |
| 1.4650 | 1.4699 | 30 |
| 1.4508 | 1.4319 | 31 |
| 1.3910 | 1.5975 | 32 |
| 1.3758 | 1.4031 | 33 |
| 1.3550 | 1.4295 | 34 |
| 1.3405 | 1.3804 | 35 |
| 1.3144 | 1.4202 | 36 |
| 1.3136 | 1.5135 | 37 |
| 1.2617 | 1.4790 | 38 |
| 1.2260 | 1.4108 | 39 |
| 1.2348 | 1.3108 | 40 |
| 1.2019 | 1.1461 | 41 |
| 1.1775 | 1.2509 | 42 |
| 1.1690 | 1.2179 | 43 |
| 1.1318 | 1.2483 | 44 |
| 1.1013 | 1.0815 | 45 |
| 1.0735 | 1.2135 | 46 |
| 1.0439 | 1.1260 | 47 |
| 1.0182 | 1.1993 | 48 |
| 0.9971 | 1.0797 | 49 |
| 0.9583 | 1.2587 | 50 |
| 0.9505 | 1.0793 | 51 |
| 0.9366 | 1.0501 | 52 |
| 0.9170 | 1.1476 | 53 |
| 0.8741 | 1.0560 | 54 |
| 0.8558 | 1.0024 | 55 |
| 0.8394 | 0.9604 | 56 |
| 0.8203 | 1.2700 | 57 |
| 0.7938 | 1.1081 | 58 |
| 0.7800 | 1.0198 | 59 |
| 0.7378 | 1.1748 | 60 |
| 0.7119 | 1.1299 | 61 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.2.0
- Tokenizers 0.12.1
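## How to use
A minimal generation sketch with the TensorFlow weights; the card does not document the expected input format, so the Korean word-problem prompt below is purely illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "madatnlp/mt5-kormath"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt ("What is 3 plus 5?"); the real prompt format is undocumented.
inputs = tokenizer("3과 5를 더하면 얼마인가요?", return_tensors="tf")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```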
|
fastai/blurr_IMDB_bert_classification | fastai | 2022-05-10T22:02:09Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2022-05-10T22:01:53Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
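## How to use
A minimal loading sketch, assuming the repository stores an exported fastai Learner and that `blurr` is installed so the pickled objects resolve; the review text is illustrative.
```python
from huggingface_hub import from_pretrained_fastai

# Pull the exported Learner from the Hub (requires fastai and blurr to be installed).
learner = from_pretrained_fastai("fastai/blurr_IMDB_bert_classification")

# Classify an illustrative movie review.
print(learner.predict("A surprisingly touching film with a terrific cast."))
```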
|
mustapha/Lunar_lander_v2_gym | mustapha | 2022-05-10T21:54:55Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T21:54:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 211.89 +/- 53.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
fminichev/TEST2ppo-LunarLander-v2 | fminichev | 2022-05-10T21:41:02Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T21:40:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 159.15 +/- 61.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
nkt32/ppo-LunarLander-v2 | nkt32 | 2022-05-10T21:31:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T20:15:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 265.99 +/- 20.58
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
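Until the official snippet is added, a minimal evaluation sketch that re-estimates the mean reward with SB3's `evaluate_policy` (the checkpoint file name below is an assumption):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint (file name assumed) and load the policy.
checkpoint = load_from_hub("nkt32/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Re-estimate the reported mean reward over 10 evaluation episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```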
|
huggingtweets/vsshole | huggingtweets | 2022-05-10T21:24:12Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/vsshole/1652217847985/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1475160033826586625/ZGf3YqfN_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">🌺 m ny 🐝🐙</div>
<div style="text-align: center; font-size: 14px;">@vsshole</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 🌺 m ny 🐝🐙.
| Data | 🌺 m ny 🐝🐙 |
| --- | --- |
| Tweets downloaded | 3221 |
| Retweets | 382 |
| Short tweets | 1727 |
| Tweets kept | 1112 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f393wuv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @vsshole's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29sa4yhp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29sa4yhp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/vsshole')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
garybake/TEST2ppo-LunarLander-v2 | garybake | 2022-05-10T21:23:16Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-04T20:00:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 273.85 +/- 20.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huxxx657/roberta-base-finetuned-scrambled-squad-15 | huxxx657 | 2022-05-10T21:13:58Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-10T19:13:39Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-scrambled-squad-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-scrambled-squad-15
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8944 | 1.0 | 5590 | 1.8722 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
FollishBoi/ppo-LunarLander-v2-try5 | FollishBoi | 2022-05-10T20:49:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T20:49:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 289.86 +/- 15.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
tjscollins/ppo-LunarLander-v2 | tjscollins | 2022-05-10T20:45:37Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T20:45:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 287.12 +/- 20.40
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
enoriega/kw_pubmed_1000_0.0003 | enoriega | 2022-05-10T20:10:43Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:keyword_pubmed_dataset",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-05-10T19:37:10Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- keyword_pubmed_dataset
metrics:
- accuracy
model-index:
- name: kw_pubmed_1000_0.0003
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: keyword_pubmed_dataset
type: keyword_pubmed_dataset
args: sentence
metrics:
- name: Accuracy
type: accuracy
value: 0.33938523162661094
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kw_pubmed_1000_0.0003
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the keyword_pubmed_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7086
- Accuracy: 0.3394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 250
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.09 | 4 | 4.3723 | 0.3436 |
| 6.0386 | 0.17 | 8 | 4.2113 | 0.3442 |
| 3.7573 | 0.26 | 12 | 4.2079 | 0.3634 |
| 2.9944 | 0.35 | 16 | 4.3370 | 0.3513 |
| 2.7048 | 0.44 | 20 | 4.8594 | 0.3067 |
| 2.7048 | 0.52 | 24 | 4.4929 | 0.3383 |
| 2.9458 | 0.61 | 28 | 4.5146 | 0.3408 |
| 2.3783 | 0.7 | 32 | 4.5680 | 0.3430 |
| 2.2485 | 0.78 | 36 | 4.5095 | 0.3477 |
| 2.1701 | 0.87 | 40 | 4.4971 | 0.3449 |
| 2.1701 | 0.96 | 44 | 4.7051 | 0.3321 |
| 2.0861 | 1.07 | 48 | 4.7615 | 0.3310 |
| 2.4168 | 1.15 | 52 | 4.7086 | 0.3394 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
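## How to use
A minimal fill-mask sketch; the sentence is illustrative and uses the standard `[MASK]` token of the uncased PubMedBERT vocabulary.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="enoriega/kw_pubmed_1000_0.0003")

# Illustrative biomedical sentence with one masked token.
for pred in fill("aspirin inhibits [MASK] aggregation."):
    print(pred["token_str"], round(pred["score"], 3))
```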
|
kosta-naumenko/ppo-LunarLander-v2-2 | kosta-naumenko | 2022-05-10T20:06:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T20:06:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 228.05 +/- 22.63
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
vanichandna/bert-base-multilingual-cased-finetuned-squadv1 | vanichandna | 2022-05-10T19:47:22Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-10T13:14:15Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vanichandna/bert-base-multilingual-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vanichandna/bert-base-multilingual-cased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5313
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 43880, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2336 | 0 |
| 0.8301 | 1 |
| 0.6456 | 2 |
| 0.5313 | 3 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
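## How to use
A minimal extractive-QA sketch using the TensorFlow weights in this repository; the question/context pair is illustrative.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="vanichandna/bert-base-multilingual-cased-finetuned-squadv1",
    framework="tf",
)

# Illustrative example; the multilingual tokenizer also handles non-English text.
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."))
```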
|
jxuhf/Fine-tuning-text-classification-model-Habana-Gaudi | jxuhf | 2022-05-10T19:39:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"optimum_habana",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-09T20:30:51Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8823529411764706
- name: F1
type: f1
value: 0.9180887372013652
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking](https://huggingface.co/bert-large-uncased-whole-word-masking) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3680
- Accuracy: 0.8824
- F1: 0.9181
- Combined Score: 0.9002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0a0+gitfe03f8c
- Datasets 2.1.0
- Tokenizers 0.12.1
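## How to use
A minimal paraphrase-scoring sketch, assuming the checkpoint loads with the standard `transformers` auto classes; the sentence pair is illustrative and the label order follows the GLUE MRPC convention (0 = not equivalent, 1 = equivalent).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jxuhf/Fine-tuning-text-classification-model-Habana-Gaudi"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode an illustrative sentence pair and score it for semantic equivalence.
enc = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly profits at the firm reached an all-time high.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
print(probs)  # [P(not equivalent), P(equivalent)] under the GLUE MRPC label order
```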
|
jadermcs/ppo-lunar-lander | jadermcs | 2022-05-10T19:27:33Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T19:27:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: mlp
results:
- metrics:
- type: mean_reward
value: 274.83 +/- 24.24
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **mlp** Agent playing **LunarLander-v2**
This is a trained model of a **mlp** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
huxxx657/roberta-base-finetuned-scrambled-squad-10 | huxxx657 | 2022-05-10T19:05:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-05-10T17:05:40Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-scrambled-squad-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-scrambled-squad-10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7482 | 1.0 | 5532 | 1.7200 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Extred/TEST2ppo-LunarLander-v2-CustomMLPNet | Extred | 2022-05-10T19:03:32Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T19:03:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 238.37 +/- 65.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Extred/TEST2ppo-LunarLander-v2-MlpLnLstmPolicy | Extred | 2022-05-10T19:02:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T18:17:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 203.89 +/- 88.13
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
KenP/mt5-small-finetuned-amazon-en-es | KenP | 2022-05-10T18:22:44Z | 3 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-05-10T17:31:10Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KenP/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KenP/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.0378
- Validation Loss: 3.3712
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.9112 | 4.3131 | 0 |
| 5.8947 | 3.7701 | 1 |
| 5.1149 | 3.5826 | 2 |
| 4.6940 | 3.5080 | 3 |
| 4.4064 | 3.4388 | 4 |
| 4.2301 | 3.4012 | 5 |
| 4.1037 | 3.3755 | 6 |
| 4.0378 | 3.3712 | 7 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
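## How to use
A minimal summarization sketch with the TensorFlow weights, assuming the model was fine-tuned to summarize product reviews as its name suggests; the review text is illustrative.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "KenP/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative English product review to compress into a short, title-like summary.
review = (
    "I bought this kettle a month ago. It boils water quickly and looks great on the "
    "counter, but the lid is already loose and the handle gets uncomfortably hot."
)
inputs = tokenizer(review, return_tensors="tf", truncation=True)
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```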
|
Tanchik/TESTppo-LunarLander-v2 | Tanchik | 2022-05-10T18:20:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-10T18:20:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 280.75 +/- 20.87
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
AndrewK/ppo-LunarLander-v2 | AndrewK | 2022-05-10T18:05:42Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-05-06T16:42:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 270.00 +/- 16.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
|
Xuandong/HPD-TinyBERT-F128 | Xuandong | 2022-05-10T17:55:05Z | 33 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2203.07687",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-05-10T17:04:19Z |
---
license: apache-2.0
---
# HPD-TinyBERT-F128
This repository contains the pre-trained models for our paper [Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation](https://arxiv.org/abs/2203.07687). The sentence embedding model contains only 14M parameters and the model size is only 55MB.
## Overview
We propose **H**omomorphic **P**rojective **D**istillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality.
## Details
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The teacher model is [`princeton-nlp/sup-simcse-roberta-large`](https://huggingface.co/princeton-nlp/sup-simcse-roberta-large) and the student model is [`nreimers/TinyBERT_L-4_H-312_v2`](https://huggingface.co/nreimers/TinyBERT_L-4_H-312_v2).
## Usage
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
After installing the package, you can simply load our model
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Xuandong/HPD-TinyBERT-F128')
```
Then you can use our model for **encoding sentences into embeddings**
```python
sentences = ['He plays guitar.', 'A street vendor is outside.']
sentence_embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, sentence_embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding)
print("")
```
## Evaluation Results
We evaluate our model on semantic textual similarity (STS) tasks. The results are:
| STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|-------|-------|-------|-------|-------|--------------|-----------------|-------|
| 74.29 | 83.05 | 78.80 | 84.62 | 81.17 | 84.36 | 80.83 | 81.02 |
## Training
Please refer to the github repo (https://github.com/XuandongZhao/HPD) for the details about the training.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 312, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 312, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citation
Please cite our paper if you use HPD in your work:
```bibtex
@article{zhao2022compressing,
title={Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation},
author={Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei},
journal={arXiv preprint arXiv:2203.07687},
year={2022}
}
``` |
Xuandong/HPD-MiniLM-F128 | Xuandong | 2022-05-10T17:54:43Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2203.07687",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-05-10T17:01:40Z | ---
license: apache-2.0
---
# HPD-MiniLM-F128
This repository contains the pre-trained models for our paper [Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation](https://arxiv.org/abs/2203.07687). The sentence embedding model contains only 23M parameters and the model size is only 87MB.
## Overview
We propose **H**omomorphic **P**rojective **D**istillation (HPD) to learn compressed sentence embeddings. Our method augments a small Transformer encoder model with learnable projection layers to produce compact representations while mimicking a large pre-trained language model to retain the sentence representation quality.
## Details
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The teacher model is [`princeton-nlp/sup-simcse-roberta-large`](https://huggingface.co/princeton-nlp/sup-simcse-roberta-large) and the student model is [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased).
## Usage
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
After installing the package, you can simply load our model
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('Xuandong/HPD-MiniLM-F128')
```
Then you can use our model for **encoding sentences into embeddings**
```python
sentences = ['He plays guitar.', 'A street vendor is outside.']
sentence_embeddings = model.encode(sentences)
for sentence, embedding in zip(sentences, sentence_embeddings):
print("Sentence:", sentence)
print("Embedding:", embedding)
print("")
```
## Evaluation Results
We evaluate our model on semantic textual similarity (STS) tasks. The results are:
| STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg. |
|-------|-------|-------|-------|-------|--------------|-----------------|-------|
| 74.94 | 84.52 | 80.25 | 84.87 | 81.90 | 84.98 | 81.15 | 81.80 |
## Training
Please refer to the github repo (https://github.com/XuandongZhao/HPD) for the details about the training.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 384, 'out_features': 128, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citation
Please cite our paper if you use HPD in your work:
```bibtex
@article{zhao2022compressing,
title={Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation},
author={Zhao, Xuandong and Yu, Zhiguo and Wu, Ming and Li, Lei},
journal={arXiv preprint arXiv:2203.07687},
year={2022}
}
``` |
allenai/multicite-multilabel-scibert | allenai | 2022-05-10T17:45:24Z | 123 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"scibert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-06T12:02:26Z | ---
language: en
tags:
- scibert
license: mit
---
# MultiCite: Multi-label Citation Intent Classification with SciBERT (NAACL 2022)
This model has been trained on the data available here: https://github.com/allenai/multicite |
husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5 | husnu | 2022-05-10T17:22:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-05-10T13:23:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5
This model is a fine-tuned version of [husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4](https://huggingface.co/husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3439
- Wer: 0.3634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1243 | 0.51 | 400 | 0.4312 | 0.4202 |
| 0.1956 | 1.02 | 800 | 0.4421 | 0.4498 |
| 0.1816 | 1.53 | 1200 | 0.4012 | 0.4285 |
| 0.1548 | 2.04 | 1600 | 0.3720 | 0.3845 |
| 0.1171 | 2.55 | 2000 | 0.3439 | 0.3634 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.10.3
|