| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-14 00:44:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 519 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-14 00:44:41) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
carbon225/canine-s-wordseg-en | carbon225 | 2022-09-23T23:42:11Z | 98 | 1 | transformers | [
"transformers",
"pytorch",
"canine",
"token-classification",
"en",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-22T03:58:10Z | ---
license: cc0-1.0
language: en
widget:
- text: "thismodelcanperformwordsegmentation"
- text: "sometimesitdoesntworkquitewell"
- text: "expertsexchange"
---
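This card does not include a usage snippet; the following is a minimal, hypothetical sketch (an assumption, not the author's documented API) that runs one of the widget examples through the standard token-classification pipeline:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint works with the standard
# token-classification pipeline; the predicted labels mark word boundaries
# at the character level (CANINE operates on characters).
segmenter = pipeline("token-classification", model="carbon225/canine-s-wordseg-en")
print(segmenter("thismodelcanperformwordsegmentation"))
```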
|
ericntay/stbl_clinical_bert_ft_rs5 | ericntay | 2022-09-23T20:39:56Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-23T20:21:55Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs5
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0936
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
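This is an illustration only: the list above maps onto a `transformers.TrainingArguments` configuration roughly as follows; the output directory and any argument not listed in the card are placeholders, not the author's actual training script.
```python
from transformers import TrainingArguments

# Hypothetical sketch mapping the hyperparameters listed above to TrainingArguments;
# output_dir and anything not listed in the card are placeholders.
training_args = TrainingArguments(
    output_dir="stbl_clinical_bert_ft_rs5",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```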
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2723 | 1.0 | 101 | 0.0875 | 0.8479 |
| 0.066 | 2.0 | 202 | 0.0688 | 0.9002 |
| 0.0328 | 3.0 | 303 | 0.0668 | 0.9070 |
| 0.0179 | 4.0 | 404 | 0.0689 | 0.9129 |
| 0.0098 | 5.0 | 505 | 0.0790 | 0.9147 |
| 0.0069 | 6.0 | 606 | 0.0805 | 0.9205 |
| 0.0033 | 7.0 | 707 | 0.0835 | 0.9268 |
| 0.0022 | 8.0 | 808 | 0.0904 | 0.9262 |
| 0.0021 | 9.0 | 909 | 0.0882 | 0.9263 |
| 0.0015 | 10.0 | 1010 | 0.0933 | 0.9289 |
| 0.0009 | 11.0 | 1111 | 0.0921 | 0.9311 |
| 0.0009 | 12.0 | 1212 | 0.0936 | 0.9268 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
g30rv17ys/ddpm-geeve-drusen-1000-200ep | g30rv17ys | 2022-09-23T19:12:36Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:39:11Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-drusen-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
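The snippet above is left as a TODO in the card; a minimal, hypothetical sketch (an assumption based on the repository's `DDPMPipeline` tag, not the author's code) could look like this:
```python
from diffusers import DDPMPipeline

# Hypothetical sketch: assumes the checkpoint loads as an unconditional DDPMPipeline.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-drusen-1000-200ep")
image = pipeline().images[0]  # one generated sample as a PIL image
image.save("ddpm_drusen_sample.png")
```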
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-drusen-1000-200ep/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-cnv-1000-200ep | g30rv17ys | 2022-09-23T19:10:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:29:54Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
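The snippet above is left as a TODO; a minimal, hypothetical sketch (an assumption based on the repository's `DDPMPipeline` tag, not the author's code):
```python
from diffusers import DDPMPipeline

# Hypothetical sketch: assumes the checkpoint loads as an unconditional DDPMPipeline.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-cnv-1000-200ep")
image = pipeline().images[0]  # one generated sample as a PIL image
image.save("ddpm_cnv_sample.png")
```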
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1000-200ep/tensorboard?#scalars)
|
gokuls/distilbert-base-Massive-intent | gokuls | 2022-09-23T19:02:42Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T18:50:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: distilbert-base-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8947368421052632
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-Massive-intent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7693
- Accuracy: 0.8947
## Model description
More information needed
## Intended uses & limitations
More information needed
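While the card lists no intended uses, a minimal, hypothetical inference sketch (an assumption, not part of the original card) for intent classification would be:
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint works with the standard
# text-classification pipeline; the example utterance is a placeholder.
classifier = pipeline("text-classification", model="gokuls/distilbert-base-Massive-intent")
print(classifier("wake me up at nine am on friday"))
```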
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4555 | 1.0 | 720 | 0.5983 | 0.8426 |
| 0.407 | 2.0 | 1440 | 0.4702 | 0.8775 |
| 0.2095 | 3.0 | 2160 | 0.5319 | 0.8834 |
| 0.1172 | 4.0 | 2880 | 0.5902 | 0.8810 |
| 0.0683 | 5.0 | 3600 | 0.6555 | 0.8810 |
| 0.042 | 6.0 | 4320 | 0.6989 | 0.8879 |
| 0.0253 | 7.0 | 5040 | 0.6963 | 0.8928 |
| 0.0208 | 8.0 | 5760 | 0.7313 | 0.8908 |
| 0.0119 | 9.0 | 6480 | 0.7683 | 0.8923 |
| 0.0093 | 10.0 | 7200 | 0.7693 | 0.8947 |
| 0.0071 | 11.0 | 7920 | 0.7873 | 0.8923 |
| 0.0047 | 12.0 | 8640 | 0.8275 | 0.8893 |
| 0.003 | 13.0 | 9360 | 0.8312 | 0.8928 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
tszocinski/bart-base-squad-question-generation | tszocinski | 2022-09-23T18:43:43Z | 75 | 0 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-22T19:36:46Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tszocinski/bart-base-squad-question-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tszocinski/bart-base-squad-question-generation
This model is a fine-tuned version of [tszocinski/bart-base-squad-question-generation](https://huggingface.co/tszocinski/bart-base-squad-question-generation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.5656
- Validation Loss: 11.1958
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
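As a hypothetical sketch only (the card documents no usage, and loading the TensorFlow weights this way is an assumption):
```python
from transformers import pipeline

# Hypothetical sketch: assumes the TF checkpoint loads through the
# text2text-generation pipeline; the context passage is a placeholder.
generator = pipeline(
    "text2text-generation",
    model="tszocinski/bart-base-squad-question-generation",
    framework="tf",
)
print(generator("SQuAD-style context passage goes here.", max_length=64))
```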
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'RMSprop', 'config': {'name': 'RMSprop', 'learning_rate': 0.001, 'decay': 0.0, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.5656 | 11.1958 | 0 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
g30rv17ys/ddpm-geeve-normal-1000-200ep | g30rv17ys | 2022-09-23T18:24:23Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-23T15:24:37Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-normal-1000-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
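The snippet above is left as a TODO; as with the sibling checkpoints, a minimal, hypothetical sketch (an assumption based on the repository's `DDPMPipeline` tag, not the author's code):
```python
from diffusers import DDPMPipeline

# Hypothetical sketch: assumes the checkpoint loads as an unconditional DDPMPipeline.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-normal-1000-200ep")
image = pipeline().images[0]  # one generated sample as a PIL image
image.save("ddpm_normal_sample.png")
```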
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-normal-1000-200ep/tensorboard?#scalars)
|
mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli | mfreihaut | 2022-09-23T18:20:23Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-21T18:05:28Z | ---
tags:
- generated_from_trainer
model-index:
- name: iab_classification-finetuned-mnli-finetuned-mnli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iab_classification-finetuned-mnli-finetuned-mnli
This model is a fine-tuned version of [mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli](https://huggingface.co/mfreihaut/iab_classification-finetuned-mnli-finetuned-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 250 | 1.5956 |
| 0.9361 | 2.0 | 500 | 0.0409 |
| 0.9361 | 3.0 | 750 | 2.9853 |
| 0.7634 | 4.0 | 1000 | 0.1317 |
| 0.7634 | 5.0 | 1250 | 0.4056 |
| 0.611 | 6.0 | 1500 | 1.8038 |
| 0.611 | 7.0 | 1750 | 0.6305 |
| 0.5627 | 8.0 | 2000 | 0.6923 |
| 0.5627 | 9.0 | 2250 | 3.7410 |
| 0.9863 | 10.0 | 2500 | 2.1912 |
| 0.9863 | 11.0 | 2750 | 1.5405 |
| 1.0197 | 12.0 | 3000 | 1.9271 |
| 1.0197 | 13.0 | 3250 | 1.1741 |
| 0.5186 | 14.0 | 3500 | 1.1864 |
| 0.5186 | 15.0 | 3750 | 0.7945 |
| 0.4042 | 16.0 | 4000 | 1.0645 |
| 0.4042 | 17.0 | 4250 | 1.8826 |
| 0.3637 | 18.0 | 4500 | 0.3234 |
| 0.3637 | 19.0 | 4750 | 0.2641 |
| 0.3464 | 20.0 | 5000 | 0.8596 |
| 0.3464 | 21.0 | 5250 | 0.5601 |
| 0.2449 | 22.0 | 5500 | 0.4543 |
| 0.2449 | 23.0 | 5750 | 1.1986 |
| 0.2595 | 24.0 | 6000 | 0.3642 |
| 0.2595 | 25.0 | 6250 | 1.3606 |
| 0.298 | 26.0 | 6500 | 0.8154 |
| 0.298 | 27.0 | 6750 | 1.1105 |
| 0.1815 | 28.0 | 7000 | 0.7443 |
| 0.1815 | 29.0 | 7250 | 0.2616 |
| 0.165 | 30.0 | 7500 | 0.5318 |
| 0.165 | 31.0 | 7750 | 0.7608 |
| 0.1435 | 32.0 | 8000 | 0.9647 |
| 0.1435 | 33.0 | 8250 | 1.3749 |
| 0.1516 | 34.0 | 8500 | 0.7167 |
| 0.1516 | 35.0 | 8750 | 0.5426 |
| 0.1359 | 36.0 | 9000 | 0.7225 |
| 0.1359 | 37.0 | 9250 | 0.5453 |
| 0.1266 | 38.0 | 9500 | 0.4825 |
| 0.1266 | 39.0 | 9750 | 0.7271 |
| 0.1153 | 40.0 | 10000 | 0.9044 |
| 0.1153 | 41.0 | 10250 | 1.0363 |
| 0.1175 | 42.0 | 10500 | 0.7987 |
| 0.1175 | 43.0 | 10750 | 0.7596 |
| 0.1089 | 44.0 | 11000 | 0.8637 |
| 0.1089 | 45.0 | 11250 | 0.8327 |
| 0.1092 | 46.0 | 11500 | 0.7161 |
| 0.1092 | 47.0 | 11750 | 0.7768 |
| 0.1068 | 48.0 | 12000 | 0.9059 |
| 0.1068 | 49.0 | 12250 | 0.8829 |
| 0.1045 | 50.0 | 12500 | 0.8711 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
nkkodelacruz/distilbert-base-uncased-finetuned-cola | nkkodelacruz | 2022-09-23T16:17:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T09:07:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5595884617444483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7903
- Matthews Correlation: 0.5596
## Model description
More information needed
## Intended uses & limitations
More information needed
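A minimal, hypothetical inference sketch (not part of the original card; the label meaning follows the usual CoLA convention and is an assumption):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical usage sketch for grammatical-acceptability scoring.
model_id = "nkkodelacruz/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was read by the student.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; label order (unacceptable/acceptable) is assumed
```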
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5224 | 1.0 | 535 | 0.5373 | 0.3974 |
| 0.3503 | 2.0 | 1070 | 0.5142 | 0.4942 |
| 0.2328 | 3.0 | 1605 | 0.5449 | 0.5449 |
| 0.1775 | 4.0 | 2140 | 0.7457 | 0.5487 |
| 0.1235 | 5.0 | 2675 | 0.7903 | 0.5596 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
gokuls/distilroberta-base-Massive-intent | gokuls | 2022-09-23T15:34:27Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T15:23:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- massive
metrics:
- accuracy
model-index:
- name: distilroberta-base-Massive-intent
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: massive
type: massive
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.8937530742744713
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-Massive-intent
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the massive dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6618
- Accuracy: 0.8938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 33
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.41 | 1.0 | 720 | 0.6742 | 0.8288 |
| 0.4978 | 2.0 | 1440 | 0.5150 | 0.8751 |
| 0.3009 | 3.0 | 2160 | 0.5705 | 0.8790 |
| 0.1953 | 4.0 | 2880 | 0.5887 | 0.8795 |
| 0.127 | 5.0 | 3600 | 0.6123 | 0.8810 |
| 0.0914 | 6.0 | 4320 | 0.6575 | 0.8834 |
| 0.0583 | 7.0 | 5040 | 0.6618 | 0.8938 |
| 0.0355 | 8.0 | 5760 | 0.7591 | 0.8864 |
| 0.0259 | 9.0 | 6480 | 0.8087 | 0.8780 |
| 0.02 | 10.0 | 7200 | 0.7964 | 0.8888 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Eulering/moonlight-night | Eulering | 2022-09-23T14:47:20Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2022-09-23T14:47:20Z | ---
license: bigscience-openrail-m
---
|
bhumikak/resultsb | bhumikak | 2022-09-23T14:21:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T13:46:43Z | ---
tags:
- generated_from_trainer
model-index:
- name: resultsb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resultsb
This model is a fine-tuned version of [bhumikak/resultsa](https://huggingface.co/bhumikak/resultsa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8957
- Rouge2 Precision: 0.2127
- Rouge2 Recall: 0.2605
- Rouge2 Fmeasure: 0.2167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 50
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Yousef-Cot/distilbert-base-uncased-finetuned-emotion | Yousef-Cot | 2022-09-23T13:21:28Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T07:18:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9218038766645168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9215
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8242 | 1.0 | 250 | 0.3311 | 0.8965 | 0.8931 |
| 0.254 | 2.0 | 500 | 0.2201 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.4.0
- Tokenizers 0.11.6
|
Vmuaddib/autotrain-gudel-department-classifier-clean-886428460 | Vmuaddib | 2022-09-23T13:07:21Z | 132 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain",
"de",
"dataset:Vmuaddib/autotrain-data-gudel-department-classifier-clean",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-19T19:51:20Z | ---
tags: [autotrain]
language: de
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Vmuaddib/autotrain-data-gudel-department-classifier-clean
co2_eq_emissions: 14.294320632050567
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 886428460
- CO2 Emissions (in grams): 14.294320632050567
## Validation Metrics
- Loss: 0.051413487643003464
- Accuracy: 0.9894490035169988
- Precision: 1.0
- Recall: 0.9862174578866769
- AUC: 0.9989318529862175
- F1: 0.9930609097918273
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Vmuaddib/autotrain-gudel-department-classifier-clean-886428460
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Vmuaddib/autotrain-gudel-department-classifier-clean-886428460", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Vmuaddib/autotrain-gudel-department-classifier-clean-886428460", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/cushbomb | huggingtweets | 2022-09-23T12:40:19Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/cushbomb/1663936814713/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560517790900969473/MPbfc6w2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">matt christman</div>
<div style="text-align: center; font-size: 14px;">@cushbomb</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from matt christman.
| Data | matt christman |
| --- | --- |
| Tweets downloaded | 3230 |
| Retweets | 241 |
| Short tweets | 685 |
| Tweets kept | 2304 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39bxpmve/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cushbomb's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gd8zqob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gd8zqob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cushbomb')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tkuye/reinforce-dd | tkuye | 2022-09-23T10:57:05Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-23T09:49:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: reinforce-dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reinforce-dd
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5375 | 1.35 | 500 | 0.0017 |
| 0.0001 | 2.7 | 1000 | 0.0000 |
| 0.0 | 4.05 | 1500 | 0.0000 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
rinascimento/distilbert-base-uncased-finetuned-emotion | rinascimento | 2022-09-23T09:52:40Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-23T06:15:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9241401774459951
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.924
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.815 | 1.0 | 250 | 0.3051 | 0.9045 | 0.9022 |
| 0.2496 | 2.0 | 500 | 0.2167 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
bryanleeharyanto/vtt-indonesia | bryanleeharyanto | 2022-09-23T06:39:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-20T07:59:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: vtt-indonesia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vtt-indonesia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3472
- Wer: 0.3582
## Model description
More information needed
## Intended uses & limitations
More information needed
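A minimal, hypothetical transcription sketch (not part of the original card; the audio path is a placeholder and 16 kHz mono input is an assumption):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint works with the standard
# automatic-speech-recognition pipeline and 16 kHz mono input audio.
asr = pipeline("automatic-speech-recognition", model="bryanleeharyanto/vtt-indonesia")
print(asr("path/to/indonesian_audio.wav"))
```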
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7612 | 3.23 | 400 | 0.6405 | 0.6714 |
| 0.4143 | 6.45 | 800 | 0.3772 | 0.4974 |
| 0.2068 | 9.68 | 1200 | 0.3877 | 0.4442 |
| 0.1436 | 12.9 | 1600 | 0.3785 | 0.4212 |
| 0.1133 | 16.13 | 2000 | 0.3944 | 0.4144 |
| 0.09 | 19.35 | 2400 | 0.3695 | 0.3925 |
| 0.0705 | 22.58 | 2800 | 0.3706 | 0.3846 |
| 0.057 | 25.81 | 3200 | 0.3720 | 0.3725 |
| 0.048 | 29.03 | 3600 | 0.3472 | 0.3582 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
apapiu/diffusion_model_aesthetic_keras | apapiu | 2022-09-23T03:56:11Z | 0 | 1 | null | [
"license:openrail",
"region:us"
]
| null | 2022-09-21T19:14:31Z | ---
license: openrail
---
A sample from the [LAION 6.5+](https://laion.ai/blog/laion-aesthetics/) image + text dataset. You can see
some samples [here](http://captions.christoph-schuhmann.de/2B-en-6.5.html).
The samples are resized + center-cropped to 64x64x3 and the .npz file also contains CLIP embeddings.
TODO: add img2dataset script.
The data can be used to train a basic text-to-image model.
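As a hedged sketch only (the archive layout is not documented here; the file name and array keys below are assumptions), the `.npz` file can be inspected with NumPy before training:
```python
import numpy as np

# Hypothetical sketch for inspecting the archive; file name and key names are assumptions.
data = np.load("laion_aesthetics_sample.npz")
print(data.files)  # list the arrays actually stored in the archive
for name in data.files:
    print(name, data[name].shape)  # e.g. 64x64x3 images and CLIP embedding vectors
```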
|
gary109/ai-light-dance_singing5_ft_wav2vec2-large-xlsr-53-5gram-v4-2-1 | gary109 | 2022-09-23T03:39:36Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-21T14:38:44Z | ---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing5_ft_wav2vec2-large-xlsr-53-5gram-v4-2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing5_ft_wav2vec2-large-xlsr-53-5gram-v4-2-1
This model is a fine-tuned version of [gary109/ai-light-dance_singing4_ft_wav2vec2-large-xlsr-53-5gram-v4-2-1](https://huggingface.co/gary109/ai-light-dance_singing4_ft_wav2vec2-large-xlsr-53-5gram-v4-2-1) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1732
- Wer: 0.0831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4351 | 1.0 | 100 | 0.1948 | 0.0903 |
| 0.4381 | 2.0 | 200 | 0.1961 | 0.0930 |
| 0.441 | 3.0 | 300 | 0.1948 | 0.0957 |
| 0.453 | 4.0 | 400 | 0.1971 | 0.0905 |
| 0.4324 | 5.0 | 500 | 0.1823 | 0.0879 |
| 0.4561 | 6.0 | 600 | 0.1934 | 0.0893 |
| 0.4231 | 7.0 | 700 | 0.2088 | 0.0977 |
| 0.4339 | 8.0 | 800 | 0.1924 | 0.0856 |
| 0.4195 | 9.0 | 900 | 0.1835 | 0.0846 |
| 0.4162 | 10.0 | 1000 | 0.1869 | 0.0908 |
| 0.411 | 11.0 | 1100 | 0.1966 | 0.0950 |
| 0.4034 | 12.0 | 1200 | 0.1890 | 0.0879 |
| 0.4155 | 13.0 | 1300 | 0.1844 | 0.0915 |
| 0.4123 | 14.0 | 1400 | 0.1849 | 0.0891 |
| 0.4002 | 15.0 | 1500 | 0.1901 | 0.0902 |
| 0.3983 | 16.0 | 1600 | 0.1879 | 0.0865 |
| 0.3907 | 17.0 | 1700 | 0.1863 | 0.0856 |
| 0.3969 | 18.0 | 1800 | 0.1773 | 0.0836 |
| 0.3721 | 19.0 | 1900 | 0.1834 | 0.0890 |
| 0.3987 | 20.0 | 2000 | 0.1817 | 0.0852 |
| 0.3863 | 21.0 | 2100 | 0.1898 | 0.0914 |
| 0.4052 | 22.0 | 2200 | 0.1882 | 0.0857 |
| 0.3811 | 23.0 | 2300 | 0.1874 | 0.0856 |
| 0.3791 | 24.0 | 2400 | 0.1932 | 0.0885 |
| 0.3919 | 25.0 | 2500 | 0.1847 | 0.0815 |
| 0.3891 | 26.0 | 2600 | 0.1850 | 0.0852 |
| 0.3719 | 27.0 | 2700 | 0.1774 | 0.0820 |
| 0.3791 | 28.0 | 2800 | 0.1756 | 0.0825 |
| 0.3537 | 29.0 | 2900 | 0.1797 | 0.0844 |
| 0.361 | 30.0 | 3000 | 0.1818 | 0.0834 |
| 0.3619 | 31.0 | 3100 | 0.1747 | 0.0838 |
| 0.3626 | 32.0 | 3200 | 0.1773 | 0.0844 |
| 0.3632 | 33.0 | 3300 | 0.1775 | 0.0825 |
| 0.3666 | 34.0 | 3400 | 0.1835 | 0.0859 |
| 0.3581 | 35.0 | 3500 | 0.1859 | 0.0868 |
| 0.3665 | 36.0 | 3600 | 0.1741 | 0.0849 |
| 0.3495 | 37.0 | 3700 | 0.1790 | 0.0837 |
| 0.3509 | 38.0 | 3800 | 0.1782 | 0.0841 |
| 0.3621 | 39.0 | 3900 | 0.1759 | 0.0841 |
| 0.3415 | 40.0 | 4000 | 0.1796 | 0.0851 |
| 0.3508 | 41.0 | 4100 | 0.1777 | 0.0821 |
| 0.3493 | 42.0 | 4200 | 0.1758 | 0.0829 |
| 0.359 | 43.0 | 4300 | 0.1788 | 0.0848 |
| 0.3438 | 44.0 | 4400 | 0.1782 | 0.0836 |
| 0.3642 | 45.0 | 4500 | 0.1732 | 0.0831 |
| 0.3456 | 46.0 | 4600 | 0.1768 | 0.0823 |
| 0.3532 | 47.0 | 4700 | 0.1735 | 0.0834 |
| 0.3448 | 48.0 | 4800 | 0.1755 | 0.0827 |
| 0.3487 | 49.0 | 4900 | 0.1767 | 0.0833 |
| 0.3427 | 50.0 | 5000 | 0.1774 | 0.0836 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
farleyknight/patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20 | farleyknight | 2022-09-23T02:53:23Z | 98 | 0 | transformers | [
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:farleyknight/big_patent_5_percent",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-20T21:32:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- farleyknight/big_patent_5_percent
metrics:
- rouge
model-index:
- name: patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20
results:
- task:
name: Summarization
type: summarization
dataset:
name: farleyknight/big_patent_5_percent
type: farleyknight/big_patent_5_percent
config: all
split: train
args: all
metrics:
- name: Rouge1
type: rouge
value: 37.3764
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20
This model is a fine-tuned version of [google/bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv) on the farleyknight/big_patent_5_percent dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2617
- Rouge1: 37.3764
- Rouge2: 13.2442
- Rougel: 26.011
- Rougelsum: 31.0145
- Gen Len: 113.8789
## Model description
More information needed
## Intended uses & limitations
More information needed
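A minimal, hypothetical usage sketch (not part of the original card; the input text and generation lengths are placeholders):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint works with the standard
# summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="farleyknight/patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20",
)
patent_text = "Full patent description text goes here."  # placeholder input
print(summarizer(patent_text, max_length=128, min_length=32))
```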
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.6121 | 0.08 | 5000 | 2.5652 | 35.0673 | 12.0073 | 24.5471 | 28.9315 | 119.9866 |
| 2.5182 | 0.17 | 10000 | 2.4797 | 34.6909 | 11.6432 | 24.87 | 28.1543 | 119.2043 |
| 2.5102 | 0.25 | 15000 | 2.4238 | 35.8574 | 12.2402 | 25.0712 | 29.5607 | 115.2890 |
| 2.4292 | 0.33 | 20000 | 2.3869 | 36.0133 | 12.2453 | 25.4039 | 29.483 | 112.5920 |
| 2.3678 | 0.41 | 25000 | 2.3594 | 35.238 | 11.6833 | 25.0449 | 28.3313 | 119.1739 |
| 2.3511 | 0.5 | 30000 | 2.3326 | 36.7755 | 12.8394 | 25.7218 | 30.2594 | 110.5819 |
| 2.3334 | 0.58 | 35000 | 2.3125 | 36.6317 | 12.7493 | 25.5388 | 30.094 | 115.5998 |
| 2.3833 | 0.66 | 40000 | 2.2943 | 37.1219 | 13.1564 | 25.7571 | 30.8666 | 113.8222 |
| 2.341 | 0.75 | 45000 | 2.2813 | 36.4962 | 12.6225 | 25.6904 | 29.9741 | 115.9845 |
| 2.3179 | 0.83 | 50000 | 2.2725 | 37.3535 | 13.1596 | 25.7385 | 31.056 | 117.7754 |
| 2.3164 | 0.91 | 55000 | 2.2654 | 36.9191 | 12.9316 | 25.7586 | 30.4691 | 116.1670 |
| 2.3046 | 0.99 | 60000 | 2.2618 | 37.3992 | 13.2731 | 26.0327 | 31.0338 | 114.5195 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
burakyldrm/wav2vec2-burak-new-300-v2-2 | burakyldrm | 2022-09-23T02:05:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2022-09-22T11:55:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-burak-new-300-v2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-burak-new-300-v2-2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6158
- Wer: 0.3094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 241
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 5.5201 | 8.62 | 500 | 3.1581 | 1.0 |
| 2.1532 | 17.24 | 1000 | 0.6883 | 0.5979 |
| 0.5465 | 25.86 | 1500 | 0.5028 | 0.4432 |
| 0.3287 | 34.48 | 2000 | 0.4986 | 0.4024 |
| 0.2571 | 43.1 | 2500 | 0.4920 | 0.3824 |
| 0.217 | 51.72 | 3000 | 0.5265 | 0.3724 |
| 0.1848 | 60.34 | 3500 | 0.5539 | 0.3714 |
| 0.1605 | 68.97 | 4000 | 0.5689 | 0.3670 |
| 0.1413 | 77.59 | 4500 | 0.5962 | 0.3501 |
| 0.1316 | 86.21 | 5000 | 0.5732 | 0.3494 |
| 0.1168 | 94.83 | 5500 | 0.5912 | 0.3461 |
| 0.1193 | 103.45 | 6000 | 0.5766 | 0.3378 |
| 0.0996 | 112.07 | 6500 | 0.5818 | 0.3403 |
| 0.0941 | 120.69 | 7000 | 0.5986 | 0.3315 |
| 0.0912 | 129.31 | 7500 | 0.5802 | 0.3280 |
| 0.0865 | 137.93 | 8000 | 0.5878 | 0.3290 |
| 0.0804 | 146.55 | 8500 | 0.5784 | 0.3228 |
| 0.0739 | 155.17 | 9000 | 0.5791 | 0.3180 |
| 0.0718 | 163.79 | 9500 | 0.5864 | 0.3146 |
| 0.0681 | 172.41 | 10000 | 0.6104 | 0.3178 |
| 0.0688 | 181.03 | 10500 | 0.5983 | 0.3160 |
| 0.0657 | 189.66 | 11000 | 0.6228 | 0.3203 |
| 0.0598 | 198.28 | 11500 | 0.6057 | 0.3122 |
| 0.0597 | 206.9 | 12000 | 0.6094 | 0.3129 |
| 0.0551 | 215.52 | 12500 | 0.6114 | 0.3127 |
| 0.0507 | 224.14 | 13000 | 0.6056 | 0.3094 |
| 0.0554 | 232.76 | 13500 | 0.6158 | 0.3094 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Dallasmorningstar/Yyy | Dallasmorningstar | 2022-09-22T23:34:28Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-09-22T23:34:03Z | ---
license: afl-3.0
---
https://huggingface.co/julien-c/DPRNNTasNet-ks16_WHAM_sepclean
|
wenkai-li/new_classifer_epoch10 | wenkai-li | 2022-09-22T23:26:38Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T21:25:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: new_classifer_epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_classifer_epoch10
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0837
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0524 | 1.0 | 4248 | 0.0628 | 0.9790 |
| 0.0251 | 2.0 | 8496 | 0.0496 | 0.9848 |
| 0.0153 | 3.0 | 12744 | 0.0857 | 0.9837 |
| 0.0049 | 4.0 | 16992 | 0.1030 | 0.9849 |
| 0.0038 | 5.0 | 21240 | 0.0837 | 0.9867 |
| 0.003 | 6.0 | 25488 | 0.1165 | 0.9856 |
| 0.0026 | 7.0 | 29736 | 0.1143 | 0.9853 |
| 0.0004 | 8.0 | 33984 | 0.1475 | 0.9856 |
| 0.0004 | 9.0 | 38232 | 0.1328 | 0.9861 |
| 0.0 | 10.0 | 42480 | 0.1349 | 0.9862 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
neelmehta00/t5-base-finetuned-eli5 | neelmehta00 | 2022-09-22T23:16:27Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-09-22T15:04:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-base-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 14.5658
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-eli5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1765
- Rouge1: 14.5658
- Rouge2: 2.2777
- Rougel: 11.2826
- Rougelsum: 13.1136
- Gen Len: 18.9938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.3398 | 1.0 | 17040 | 3.1765 | 14.5658 | 2.2777 | 11.2826 | 13.1136 | 18.9938 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
mehdidn/finetuned_bert_fa_zwnj_base_ner | mehdidn | 2022-09-22T21:42:36Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-02T21:27:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_parsBERT_NER_fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_parsBERT_NER_fa
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the mixed NER dataset collected from ARMAN, PEYMA, and WikiANN.
It achieves the following results on the evaluation set:
- Loss: 0.0297
- Precision: 0.9481
- Recall: 0.9582
- F1: 0.9531
- Accuracy: 0.9942
## Model description
More information needed
## Intended uses & limitations
More information needed
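A minimal, hypothetical inference sketch (not part of the original card; the input sentence is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch: assumes the checkpoint works with the standard
# token-classification pipeline; entities are grouped with aggregation_strategy.
ner = pipeline(
    "token-classification",
    model="mehdidn/finetuned_bert_fa_zwnj_base_ner",
    aggregation_strategy="simple",
)
print(ner("یک جملهٔ فارسی شامل نام اشخاص و مکان‌ها"))  # placeholder Persian sentence with named entities
```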
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.12 | 1.0 | 1821 | 0.0543 | 0.8387 | 0.8577 | 0.8481 | 0.9830 |
| 0.0381 | 2.0 | 3642 | 0.0360 | 0.8941 | 0.9247 | 0.9091 | 0.9898 |
| 0.0168 | 3.0 | 5463 | 0.0282 | 0.9273 | 0.9452 | 0.9362 | 0.9927 |
| 0.0078 | 4.0 | 7284 | 0.0284 | 0.9391 | 0.9551 | 0.9470 | 0.9938 |
| 0.0033 | 5.0 | 9105 | 0.0297 | 0.9481 | 0.9582 | 0.9531 | 0.9942 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
JJRohan/ppo-LunarLander-v2 | JJRohan | 2022-09-22T21:12:36Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-22T21:12:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 169.43 +/- 77.42
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
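The snippet above is left as a TODO; a minimal, hypothetical sketch (an assumption, not the author's code; the checkpoint filename inside the repo is a guess) for loading and evaluating the agent:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical sketch: the filename of the checkpoint inside the repo is an assumption.
checkpoint = load_from_hub(repo_id="JJRohan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```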
|
nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical | nlp-guild | 2022-09-22T20:06:57Z | 136 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T19:42:37Z | Fine-tuned bert-base-chinese for the intent recognition task on this [dataset](https://huggingface.co/datasets/nlp-guild/intent-recognition-biomedical).
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
tokenizer = AutoTokenizer.from_pretrained("nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical")
model = AutoModelForSequenceClassification.from_pretrained("nlp-guild/bert-base-chinese-finetuned-intent_recognition-biomedical")
nlp = TextClassificationPipeline(model = model, tokenizer = tokenizer)
label_set = [
'定义',
'病因',
'预防',
'临床表现(病症表现)',
'相关病症',
'治疗方法',
'所属科室',
'传染性',
'治愈率',
'禁忌',
'化验/体检方案',
'治疗时间',
'其他'
]
def readable_results(top_k:int, usr_query:str):
raw = nlp(usr_query, top_k = top_k)
def f(x):
index = int(x['label'][6:])
x['label'] = label_set[index]
for i in raw:
f(i)
return raw
readable_results(3,'得了心脏病怎么办')
'''
[{'label': '治疗方法', 'score': 0.9994503855705261},
{'label': '其他', 'score': 0.00018375989748165011},
{'label': '临床表现(病症表现)', 'score': 0.00010841596667887643}]
'''
``` |
TingChenChang/hpvqa-lcqmc-ocnli-cnsd-multi-MiniLM-v2 | TingChenChang | 2022-09-22T19:23:21Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-22T19:23:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 12 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
jayanta/twitter-roberta-base-sentiment-sentiment-memes-30epcohs | jayanta | 2022-09-22T19:04:33Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T14:38:21Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-roberta-base-sentiment-sentiment-memes-30epcohs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-sentiment-sentiment-memes-30epcohs
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3027
- Accuracy: 0.8517
- Precision: 0.8536
- Recall: 0.8517
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2504 | 1.0 | 2147 | 0.7129 | 0.8087 | 0.8112 | 0.8087 | 0.8036 |
| 0.2449 | 2.0 | 4294 | 0.7500 | 0.8229 | 0.8279 | 0.8229 | 0.8240 |
| 0.2652 | 3.0 | 6441 | 0.7460 | 0.8181 | 0.8185 | 0.8181 | 0.8149 |
| 0.2585 | 4.0 | 8588 | 0.7906 | 0.8155 | 0.8152 | 0.8155 | 0.8153 |
| 0.2534 | 5.0 | 10735 | 0.8178 | 0.8061 | 0.8180 | 0.8061 | 0.8080 |
| 0.2498 | 6.0 | 12882 | 0.8139 | 0.8166 | 0.8163 | 0.8166 | 0.8164 |
| 0.2825 | 7.0 | 15029 | 0.7494 | 0.8155 | 0.8210 | 0.8155 | 0.8168 |
| 0.2459 | 8.0 | 17176 | 0.8870 | 0.8061 | 0.8122 | 0.8061 | 0.8075 |
| 0.2303 | 9.0 | 19323 | 0.8699 | 0.7987 | 0.8060 | 0.7987 | 0.8003 |
| 0.2425 | 10.0 | 21470 | 0.8043 | 0.8244 | 0.8275 | 0.8244 | 0.8253 |
| 0.2143 | 11.0 | 23617 | 0.9163 | 0.8208 | 0.8251 | 0.8208 | 0.8219 |
| 0.2054 | 12.0 | 25764 | 0.8330 | 0.8239 | 0.8258 | 0.8239 | 0.8245 |
| 0.208 | 13.0 | 27911 | 1.0673 | 0.8134 | 0.8216 | 0.8134 | 0.8150 |
| 0.1668 | 14.0 | 30058 | 0.9071 | 0.8270 | 0.8276 | 0.8270 | 0.8273 |
| 0.1571 | 15.0 | 32205 | 0.9294 | 0.8339 | 0.8352 | 0.8339 | 0.8344 |
| 0.1857 | 16.0 | 34352 | 0.9909 | 0.8354 | 0.8350 | 0.8354 | 0.8352 |
| 0.1476 | 17.0 | 36499 | 0.9747 | 0.8433 | 0.8436 | 0.8433 | 0.8434 |
| 0.1341 | 18.0 | 38646 | 0.9372 | 0.8422 | 0.8415 | 0.8422 | 0.8415 |
| 0.1181 | 19.0 | 40793 | 1.0301 | 0.8433 | 0.8443 | 0.8433 | 0.8437 |
| 0.1192 | 20.0 | 42940 | 1.1332 | 0.8407 | 0.8415 | 0.8407 | 0.8410 |
| 0.0983 | 21.0 | 45087 | 1.2002 | 0.8428 | 0.8498 | 0.8428 | 0.8440 |
| 0.0951 | 22.0 | 47234 | 1.2141 | 0.8475 | 0.8504 | 0.8475 | 0.8483 |
| 0.0784 | 23.0 | 49381 | 1.1652 | 0.8407 | 0.8453 | 0.8407 | 0.8417 |
| 0.0623 | 24.0 | 51528 | 1.1730 | 0.8417 | 0.8443 | 0.8417 | 0.8425 |
| 0.054 | 25.0 | 53675 | 1.2900 | 0.8454 | 0.8496 | 0.8454 | 0.8464 |
| 0.0584 | 26.0 | 55822 | 1.2831 | 0.8480 | 0.8497 | 0.8480 | 0.8486 |
| 0.0531 | 27.0 | 57969 | 1.3043 | 0.8506 | 0.8524 | 0.8506 | 0.8512 |
| 0.0522 | 28.0 | 60116 | 1.2891 | 0.8527 | 0.8554 | 0.8527 | 0.8534 |
| 0.037 | 29.0 | 62263 | 1.3077 | 0.8538 | 0.8559 | 0.8538 | 0.8544 |
| 0.038 | 30.0 | 64410 | 1.3027 | 0.8517 | 0.8536 | 0.8517 | 0.8523 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.1
|
lizaboiarchuk/tiny-rubert-war-finetuned | lizaboiarchuk | 2022-09-22T17:27:24Z | 70 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-09-22T17:04:15Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: lizaboiarchuk/tiny-rubert-war-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lizaboiarchuk/tiny-rubert-war-finetuned
This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.7630
- Validation Loss: 3.4797
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -787, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1307 | 3.7059 | 0 |
| 4.0402 | 3.6937 | 1 |
| 3.9512 | 3.5754 | 2 |
| 3.8665 | 3.4710 | 3 |
| 3.7630 | 3.4797 | 4 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Chemsseddine/distilbert-base-uncased-finetuned-cola | Chemsseddine | 2022-09-22T15:31:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-07T17:23:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 2.1485 |
| No log | 2.0 | 10 | 2.0983 |
| No log | 3.0 | 15 | 2.0499 |
| No log | 4.0 | 20 | 2.0155 |
| No log | 5.0 | 25 | 2.0011 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
m-lin20/satellite-instrument-bert-NER | m-lin20 | 2022-09-22T13:32:42Z | 104 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
language: "pt"
widget:
- text: "Poised for launch in mid-2021, the joint NASA-USGS Landsat 9 mission will continue this important data record. In many respects Landsat 9 is a clone of Landsat-8. The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI, providing calibrated imagery covering the solar reflected wavelengths. The Thermal Infrared Sensor-2 (TIRS-2) improves upon Landsat 8 TIRS, addressing known issues including stray light incursion and a malfunction of the instrument scene select mirror. In addition, Landsat 9 adds redundancy to TIRS-2, thus upgrading the instrument to a 5-year design life commensurate with other elements of the mission. Initial performance testing of OLI-2 and TIRS-2 indicate that the instruments are of excellent quality and expected to match or improve on Landsat 8 data quality. "
example_title: "example 1"
- text: "Compared to its predecessor, Jason-3, the two AMR-C radiometer instruments have an external calibration system which enables higher radiometric stability accomplished by moving the secondary mirror between well-defined targets. Sentinel-6 allows continuing the study of the ocean circulation, climate change, and sea-level rise for at least another decade. Besides the external calibration for the AMR heritage radiometer (18.7, 23.8, and 34 GHz channels), the AMR-C contains a high-resolution microwave radiometer (HRMR) with radiometer channels at 90, 130, and 168 GHz. This subsystem allows for a factor of 5× higher spatial resolution at coastal transitions. This article presents a brief description of the instrument and the measured performance of the completed AMR-C-A and AMR-C-B instruments."
example_title: "example 2"
- text: "Landsat 9 will continue the Landsat data record into its fifth decade with a near-copy build of Landsat 8 with launch scheduled for December 2020. The two instruments on Landsat 9 are Thermal Infrared Sensor-2 (TIRS-2) and Operational Land Imager-2 (OLI-2)."
example_title: "example 3"
inference:
parameters:
aggregation_strategy: "first"
---
# satellite-instrument-bert-NER
For details, please visit the [GitHub link](https://github.com/THU-EarthInformationScienceLab/Satellite-Instrument-NER).
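## Usage
A minimal usage sketch (not part of the original card), using the standard transformers pipeline with the same `aggregation_strategy` as the inference widget above:
```python
from transformers import pipeline

# Token-classification pipeline for satellite/instrument entity recognition
ner = pipeline(
    "token-classification",
    model="m-lin20/satellite-instrument-bert-NER",
    aggregation_strategy="first",
)

text = "The Operational Land Imager-2 (OLI-2) is largely identical to Landsat 8 OLI."
print(ner(text))  # list of detected entity spans with labels and scores
```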
## Citation
Our [paper](https://www.tandfonline.com/doi/full/10.1080/17538947.2022.2107098) has been published in the International Journal of Digital Earth :
```bibtex
@article{lin2022satellite,
title={Satellite and instrument entity recognition using a pre-trained language model with distant supervision},
author={Lin, Ming and Jin, Meng and Liu, Yufu and Bai, Yuqi},
journal={International Journal of Digital Earth},
volume={15},
number={1},
pages={1290--1304},
year={2022},
publisher={Taylor \& Francis}
}
``` |
fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic | fxmarty | 2022-09-22T13:28:21Z | 3 | 0 | transformers | [
"transformers",
"onnx",
"distilbert",
"text-classification",
"dataset:sst2",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-22T13:19:36Z | ---
license: apache-2.0
datasets:
- sst2
- glue
---
This model is a fork of https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english, quantized with dynamic Post-Training Quantization (PTQ) using ONNX Runtime and the 🤗 Optimum library.
It achieves 0.901 accuracy on the validation set.
To load this model:
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
model = ORTModelForSequenceClassification.from_pretrained("fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic")
```
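As a quick follow-up sketch (not from the original card), the exported ONNX model can be plugged into a standard transformers pipeline; it is assumed here that the repository also ships the matching tokenizer files:
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "fxmarty/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files are present in the repo
model = ORTModelForSequenceClassification.from_pretrained(model_id)

# ORT models are drop-in compatible with the transformers pipeline API
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("I love the new design of this library!"))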
|
rram12/ML-agents_pyramids | rram12 | 2022-09-22T12:22:53Z | 15 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2022-09-22T12:22:48Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: rram12/ML-agents_pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sd-concepts-library/bluebey-2 | sd-concepts-library | 2022-09-22T12:21:34Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T12:21:30Z | ---
license: mit
---
### Bluebey-2 on Stable Diffusion
This is the `<bluebey>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
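If you prefer to load the learned embedding directly with 🤗 Diffusers rather than through the notebooks, a rough sketch is shown below. It assumes the repository ships a `learned_embeds.bin` file (as textual-inversion concept repos typically do) and uses the public `CompVis/stable-diffusion-v1-4` weights; neither is guaranteed by this card.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# learned_embeds.bin downloaded from this repository (filename is an assumption)
learned = torch.load("learned_embeds.bin")
token, embedding = next(iter(learned.items()))

# Register the new placeholder token and copy its learned embedding into the text encoder
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe("a portrait of <bluebey>").images[0]
image.save("bluebey.png")
```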
Here is the new concept you will be able to use as an `object`:



|
muhtasham/bert-small-finetuned-finer-longer10 | muhtasham | 2022-09-22T11:51:56Z | 178 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-25T21:50:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-finetuned-finer-longer10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-finetuned-finer-longer10
This model is a fine-tuned version of [muhtasham/bert-small-finetuned-finer](https://huggingface.co/muhtasham/bert-small-finetuned-finer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5687 | 1.0 | 2433 | 1.5357 |
| 1.5081 | 2.0 | 4866 | 1.4759 |
| 1.4813 | 3.0 | 7299 | 1.4337 |
| 1.4453 | 4.0 | 9732 | 1.4084 |
| 1.4257 | 5.0 | 12165 | 1.3913 |
| 1.4155 | 6.0 | 14598 | 1.3855 |
| 1.4057 | 7.0 | 17031 | 1.3791 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-parsed20 | muhtasham | 2022-09-22T11:34:48Z | 179 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-17T13:31:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-parsed20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-parsed20
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.0763 |
| No log | 2.0 | 8 | 2.8723 |
| No log | 3.0 | 12 | 3.5102 |
| No log | 4.0 | 16 | 2.8641 |
| No log | 5.0 | 20 | 2.7827 |
| No log | 6.0 | 24 | 2.8163 |
| No log | 7.0 | 28 | 3.2415 |
| No log | 8.0 | 32 | 3.0477 |
| No log | 9.0 | 36 | 3.5160 |
| No log | 10.0 | 40 | 3.1248 |
| No log | 11.0 | 44 | 3.2159 |
| No log | 12.0 | 48 | 3.2177 |
| No log | 13.0 | 52 | 2.9108 |
| No log | 14.0 | 56 | 3.3758 |
| No log | 15.0 | 60 | 3.1335 |
| No log | 16.0 | 64 | 2.9753 |
| No log | 17.0 | 68 | 2.9922 |
| No log | 18.0 | 72 | 3.2798 |
| No log | 19.0 | 76 | 2.7280 |
| No log | 20.0 | 80 | 3.1193 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finetuned-parsed-longer50 | muhtasham | 2022-09-22T11:34:27Z | 179 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-08-17T13:39:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finetuned-finetuned-parsed-longer50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-finetuned-parsed-longer50
This model is a fine-tuned version of [muhtasham/bert-small-finetuned-parsed20](https://huggingface.co/muhtasham/bert-small-finetuned-parsed20) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 2.9807 |
| No log | 2.0 | 8 | 2.7267 |
| No log | 3.0 | 12 | 3.3484 |
| No log | 4.0 | 16 | 2.7573 |
| No log | 5.0 | 20 | 2.7063 |
| No log | 6.0 | 24 | 2.7353 |
| No log | 7.0 | 28 | 3.1290 |
| No log | 8.0 | 32 | 2.9371 |
| No log | 9.0 | 36 | 3.4265 |
| No log | 10.0 | 40 | 3.0537 |
| No log | 11.0 | 44 | 3.1382 |
| No log | 12.0 | 48 | 3.1454 |
| No log | 13.0 | 52 | 2.8379 |
| No log | 14.0 | 56 | 3.2760 |
| No log | 15.0 | 60 | 3.0504 |
| No log | 16.0 | 64 | 2.9001 |
| No log | 17.0 | 68 | 2.8892 |
| No log | 18.0 | 72 | 3.1837 |
| No log | 19.0 | 76 | 2.6404 |
| No log | 20.0 | 80 | 3.0600 |
| No log | 21.0 | 84 | 3.1432 |
| No log | 22.0 | 88 | 2.9608 |
| No log | 23.0 | 92 | 3.0513 |
| No log | 24.0 | 96 | 3.1038 |
| No log | 25.0 | 100 | 3.0975 |
| No log | 26.0 | 104 | 2.8977 |
| No log | 27.0 | 108 | 2.9416 |
| No log | 28.0 | 112 | 2.9015 |
| No log | 29.0 | 116 | 2.7947 |
| No log | 30.0 | 120 | 2.9278 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sherover125/newsclassifier | sherover125 | 2022-09-22T10:46:34Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-06-22T17:28:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: newsclassifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsclassifier
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1405
- Matthews Correlation: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.2207 | 1.0 | 2397 | 0.1706 | 0.9595 |
| 0.0817 | 2.0 | 4794 | 0.1505 | 0.9663 |
| 0.0235 | 3.0 | 7191 | 0.1405 | 0.9731 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mahaveer/ppo-LunarLander-v2 | mahaveer | 2022-09-22T10:11:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-22T09:57:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 194.40 +/- 31.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is an assumption.
checkpoint = load_from_hub("mahaveer/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GItaf/roberta-base-roberta-base-TF-weight1-epoch10 | GItaf | 2022-09-22T09:35:57Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-22T09:34:27Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-roberta-base-TF-weight1-epoch10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-roberta-base-TF-weight1-epoch10
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/roberta-base-roberta-base-TF-weight1-epoch15 | GItaf | 2022-09-22T09:23:00Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-21T15:32:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-roberta-base-TF-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-roberta-base-TF-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8322
- Cls loss: 0.6900
- Lm loss: 4.1423
- Cls Accuracy: 0.5401
- Cls F1: 0.3788
- Cls Precision: 0.2917
- Cls Recall: 0.5401
- Perplexity: 62.95
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.3158 | 1.0 | 3470 | 4.9858 | 0.6910 | 4.2949 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 73.32 |
| 4.9772 | 2.0 | 6940 | 4.8876 | 0.6956 | 4.1920 | 0.4599 | 0.2898 | 0.2115 | 0.4599 | 66.15 |
| 4.8404 | 3.0 | 10410 | 4.8454 | 0.6901 | 4.1553 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 63.77 |
| 4.7439 | 4.0 | 13880 | 4.8177 | 0.6904 | 4.1274 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.02 |
| 4.6667 | 5.0 | 17350 | 4.8065 | 0.6903 | 4.1163 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.33 |
| 4.6018 | 6.0 | 20820 | 4.8081 | 0.6963 | 4.1119 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.06 |
| 4.5447 | 7.0 | 24290 | 4.8089 | 0.6912 | 4.1177 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.42 |
| 4.4944 | 8.0 | 27760 | 4.8128 | 0.6900 | 4.1228 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.73 |
| 4.4505 | 9.0 | 31230 | 4.8152 | 0.6905 | 4.1248 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.85 |
| 4.4116 | 10.0 | 34700 | 4.8129 | 0.6908 | 4.1221 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.69 |
| 4.3787 | 11.0 | 38170 | 4.8146 | 0.6906 | 4.1241 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 61.81 |
| 4.3494 | 12.0 | 41640 | 4.8229 | 0.6900 | 4.1329 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.36 |
| 4.3253 | 13.0 | 45110 | 4.8287 | 0.6900 | 4.1388 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.73 |
| 4.3075 | 14.0 | 48580 | 4.8247 | 0.6900 | 4.1347 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.47 |
| 4.2936 | 15.0 | 52050 | 4.8322 | 0.6900 | 4.1423 | 0.5401 | 0.3788 | 0.2917 | 0.5401 | 62.95 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
GItaf/gpt2-gpt2-TF-weight1-epoch15 | GItaf | 2022-09-22T09:21:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-21T15:31:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-TF-weight1-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-TF-weight1-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0647
- Cls loss: 2.1295
- Lm loss: 3.9339
- Cls Accuracy: 0.8375
- Cls F1: 0.8368
- Cls Precision: 0.8381
- Cls Recall: 0.8375
- Perplexity: 51.11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.8702 | 1.0 | 3470 | 4.7157 | 0.6951 | 4.0201 | 0.7752 | 0.7670 | 0.7978 | 0.7752 | 55.71 |
| 4.5856 | 2.0 | 6940 | 4.6669 | 0.6797 | 3.9868 | 0.8352 | 0.8333 | 0.8406 | 0.8352 | 53.88 |
| 4.4147 | 3.0 | 10410 | 4.6619 | 0.6899 | 3.9716 | 0.8375 | 0.8368 | 0.8384 | 0.8375 | 53.07 |
| 4.2479 | 4.0 | 13880 | 4.8305 | 0.8678 | 3.9622 | 0.8403 | 0.8396 | 0.8413 | 0.8403 | 52.57 |
| 4.1281 | 5.0 | 17350 | 4.9349 | 0.9747 | 3.9596 | 0.8340 | 0.8334 | 0.8346 | 0.8340 | 52.44 |
| 4.195 | 6.0 | 20820 | 4.9303 | 0.9770 | 3.9528 | 0.8300 | 0.8299 | 0.8299 | 0.8300 | 52.08 |
| 4.0645 | 7.0 | 24290 | 5.0425 | 1.0979 | 3.9440 | 0.8317 | 0.8313 | 0.8317 | 0.8317 | 51.62 |
| 3.9637 | 8.0 | 27760 | 5.3955 | 1.4533 | 3.9414 | 0.8329 | 0.8325 | 0.8328 | 0.8329 | 51.49 |
| 3.9094 | 9.0 | 31230 | 5.6029 | 1.6645 | 3.9375 | 0.8231 | 0.8233 | 0.8277 | 0.8231 | 51.29 |
| 3.8661 | 10.0 | 34700 | 5.8175 | 1.8821 | 3.9344 | 0.8144 | 0.8115 | 0.8222 | 0.8144 | 51.13 |
| 3.8357 | 11.0 | 38170 | 5.6824 | 1.7494 | 3.9319 | 0.8340 | 0.8336 | 0.8342 | 0.8340 | 51.01 |
| 3.8019 | 12.0 | 41640 | 5.8509 | 1.9167 | 3.9332 | 0.8369 | 0.8357 | 0.8396 | 0.8369 | 51.07 |
| 3.7815 | 13.0 | 45110 | 5.9044 | 1.9686 | 3.9346 | 0.8409 | 0.8407 | 0.8408 | 0.8409 | 51.14 |
| 3.7662 | 14.0 | 48580 | 6.0088 | 2.0738 | 3.9337 | 0.8363 | 0.8359 | 0.8364 | 0.8363 | 51.10 |
| 3.7524 | 15.0 | 52050 | 6.0647 | 2.1295 | 3.9339 | 0.8375 | 0.8368 | 0.8381 | 0.8375 | 51.11 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
chintagunta85/electramed-small-deid2014-ner-v5-classweights | chintagunta85 | 2022-09-22T09:08:27Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:i2b22014",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-22T07:48:30Z | ---
tags:
- generated_from_trainer
datasets:
- i2b22014
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-deid2014-ner-v5-classweights
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: i2b22014
type: i2b22014
config: i2b22014-deid
split: train
args: i2b22014-deid
metrics:
- name: Precision
type: precision
value: 0.8832236842105263
- name: Recall
type: recall
value: 0.6910561632502987
- name: F1
type: f1
value: 0.7754112732711052
- name: Accuracy
type: accuracy
value: 0.9883040491052534
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-deid2014-ner-v5-classweights
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the i2b22014 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Precision: 0.8832
- Recall: 0.6911
- F1: 0.7754
- Accuracy: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0001 | 1.0 | 1838 | 0.0008 | 0.7702 | 0.3780 | 0.5071 | 0.9771 |
| 0.0 | 2.0 | 3676 | 0.0007 | 0.8753 | 0.5671 | 0.6883 | 0.9827 |
| 0.0 | 3.0 | 5514 | 0.0006 | 0.8074 | 0.4128 | 0.5463 | 0.9775 |
| 0.0 | 4.0 | 7352 | 0.0007 | 0.8693 | 0.6102 | 0.7170 | 0.9848 |
| 0.0 | 5.0 | 9190 | 0.0006 | 0.8710 | 0.6022 | 0.7121 | 0.9849 |
| 0.0 | 6.0 | 11028 | 0.0007 | 0.8835 | 0.6547 | 0.7521 | 0.9867 |
| 0.0 | 7.0 | 12866 | 0.0009 | 0.8793 | 0.6661 | 0.7579 | 0.9873 |
| 0.0 | 8.0 | 14704 | 0.0008 | 0.8815 | 0.6740 | 0.7639 | 0.9876 |
| 0.0 | 9.0 | 16542 | 0.0009 | 0.8812 | 0.6851 | 0.7709 | 0.9880 |
| 0.0 | 10.0 | 18380 | 0.0009 | 0.8832 | 0.6911 | 0.7754 | 0.9883 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
prakashkmr48/Prompt-image-inpainting | prakashkmr48 | 2022-09-22T08:58:57Z | 0 | 0 | null | [
"region:us"
]
| null | 2022-09-22T08:51:46Z | git lfs install
git clone https://huggingface.co/prakashkmr48/Prompt-image-inpainting |
Hoax0930/kyoto_marian | Hoax0930 | 2022-09-22T08:32:43Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2022-09-22T07:47:04Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: kyoto_marian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kyoto_marian
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1941
- Bleu: 13.4500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/ghostproject-men | sd-concepts-library | 2022-09-22T07:36:08Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T07:36:02Z | ---
license: mit
---
### ghostproject-men on Stable Diffusion
This is the `<ghostsproject-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
sd-concepts-library/pool-test | sd-concepts-library | 2022-09-22T06:53:48Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T06:53:43Z | ---
license: mit
---
### Pool test on Stable Diffusion
This is the `<pool_test>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
chintagunta85/electramed-small-deid2014-ner-v4 | chintagunta85 | 2022-09-22T06:33:10Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:i2b22014",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-09-22T05:55:58Z | ---
tags:
- generated_from_trainer
datasets:
- i2b22014
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: electramed-small-deid2014-ner-v4
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: i2b22014
type: i2b22014
config: i2b22014-deid
split: train
args: i2b22014-deid
metrics:
- name: Precision
type: precision
value: 0.7571112095702259
- name: Recall
type: recall
value: 0.7853663020498207
- name: F1
type: f1
value: 0.770979967514889
- name: Accuracy
type: accuracy
value: 0.9906153616114308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electramed-small-deid2014-ner-v4
This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the i2b22014 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0362
- Precision: 0.7571
- Recall: 0.7854
- F1: 0.7710
- Accuracy: 0.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0143 | 1.0 | 1838 | 0.1451 | 0.3136 | 0.3463 | 0.3291 | 0.9700 |
| 0.0033 | 2.0 | 3676 | 0.0940 | 0.4293 | 0.4861 | 0.4559 | 0.9758 |
| 0.0014 | 3.0 | 5514 | 0.0725 | 0.4906 | 0.5766 | 0.5301 | 0.9799 |
| 0.0007 | 4.0 | 7352 | 0.0568 | 0.6824 | 0.7022 | 0.6921 | 0.9860 |
| 0.0112 | 5.0 | 9190 | 0.0497 | 0.6966 | 0.7400 | 0.7177 | 0.9870 |
| 0.0002 | 6.0 | 11028 | 0.0442 | 0.7126 | 0.7549 | 0.7332 | 0.9878 |
| 0.0002 | 7.0 | 12866 | 0.0404 | 0.7581 | 0.7591 | 0.7586 | 0.9896 |
| 0.0002 | 8.0 | 14704 | 0.0376 | 0.7540 | 0.7804 | 0.7670 | 0.9904 |
| 0.0002 | 9.0 | 16542 | 0.0367 | 0.7548 | 0.7825 | 0.7684 | 0.9905 |
| 0.0001 | 10.0 | 18380 | 0.0362 | 0.7571 | 0.7854 | 0.7710 | 0.9906 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/test2 | sd-concepts-library | 2022-09-22T06:29:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T06:29:45Z | ---
license: mit
---
### TEST2 on Stable Diffusion
This is the `<AIOCARD>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










|
sd-concepts-library/bee | sd-concepts-library | 2022-09-22T05:01:07Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T05:00:56Z | ---
license: mit
---
### BEE on Stable Diffusion
This is the `<b-e-e>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/yinit | sd-concepts-library | 2022-09-22T04:58:38Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T04:58:24Z | ---
license: mit
---
### yinit on Stable Diffusion
This is the `yinit-dropcap` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


























|
sd-concepts-library/million-live-spade-q-object-3k | sd-concepts-library | 2022-09-22T04:34:40Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T04:34:30Z | ---
license: mit
---
### million-live-spade-q-object-3k on Stable Diffusion
This is the `<spade_q>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:













|
sd-concepts-library/million-live-akane-shifuku-3k | sd-concepts-library | 2022-09-22T03:28:33Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T03:28:22Z | ---
license: mit
---
### million-live-akane-shifuku-3k on Stable Diffusion
This is the `<akane>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
ashiqabdulkhader/GPT2-Poet | ashiqabdulkhader | 2022-09-22T03:24:00Z | 381 | 3 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-22T02:45:20Z | ---
license: bigscience-bloom-rail-1.0
widget :
- text: "I used to have a lover"
example_title: "I used to have a lover"
- text : "The old cupola glinted above the clouds"
example_title: "The old cupola"
- text : "Behind the silo, the Mother Rabbit hunches"
example_title : "Behind the silo"
---
# GPT2-Poet
## Model description
GPT2-Poet is a GPT-2 transformer model fine-tuned on a large corpus of English poems in a self-supervised fashion. This means it was trained on the raw texts only, with no human labelling, using an automatic process to generate inputs and labels from those texts. In short, it was trained to guess the next word in a sentence.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequences shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism so that the prediction for token i only uses the inputs from 1 to i and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
## Usage
You can use this model for English Poem generation:
```python
>>> from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained("ashiqabdulkhader/GPT2-Poet")
>>> model = TFGPT2LMHeadModel.from_pretrained("ashiqabdulkhader/GPT2-Poet")
>>> text = "The quick brown fox"
>>> input_ids = tokenizer.encode(text, return_tensors='tf')
>>> sample_outputs = model.generate(
input_ids,
do_sample=True,
max_length=100,
top_k=0,
top_p=0.9,
temperature=1.0,
num_return_sequences=3
)
>>> print("Output:", tokenizer.decode(sample_outputs[0], skip_special_tokens=True))
```
|
yuntian-deng/latex2im_ss | yuntian-deng | 2022-09-22T02:20:24Z | 1 | 0 | diffusers | [
"diffusers",
"en",
"dataset:yuntian-deng/im2latex-100k",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-22T02:19:32Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: yuntian-deng/im2latex-100k
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# latex2im_ss
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `yuntian-deng/im2latex-100k` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
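Until the official snippet is filled in, a minimal sampling sketch is given below; it assumes the checkpoint loads as a standard `DDPMPipeline`, which is suggested by the repo tags but not stated in the card:
```python
from diffusers import DDPMPipeline

# Load the unconditional diffusion pipeline from this repository
pipeline = DDPMPipeline.from_pretrained("yuntian-deng/latex2im_ss")

# Generate one sample; recent diffusers versions expose the output as .images
image = pipeline().images[0]
image.save("sample.png")
```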
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/yuntian-deng/latex2im_ss/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-drusen-2000-128 | g30rv17ys | 2022-09-22T01:53:45Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
]
| null | 2022-09-21T18:22:01Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-drusen-2000-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
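In the meantime, a minimal sketch for drawing samples is shown below; loading the checkpoint as a `DDPMPipeline` is an assumption based on the repo tags:
```python
from diffusers import DDPMPipeline

# Load the unconditional retinal-OCT diffusion pipeline from this repository
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-drusen-2000-128")

# Generate one 128x128 sample image
image = pipeline().images[0]
image.save("drusen_sample.png")
```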
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-drusen-2000-128/tensorboard?#scalars)
|
hwangt/donut-base-sroie | hwangt | 2022-09-22T01:45:38Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2022-09-22T01:10:38Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/gba-pokemon-sprites | sd-concepts-library | 2022-09-22T00:48:32Z | 0 | 30 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T00:48:25Z | ---
license: mit
---
### GBA Pokemon Sprites on Stable Diffusion
This is the `<GBA-Poke-Sprites>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


































































































































































































































































































































































































|
sd-concepts-library/sherhook-painting-v2 | sd-concepts-library | 2022-09-22T00:30:50Z | 0 | 4 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-22T00:30:44Z | ---
license: mit
---
### Sherhook Painting v2 on Stable Diffusion
This is the `<sherhook>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:









|
research-backup/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification | research-backup | 2022-09-21T23:50:03Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T23:18:14Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7550595238095238
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5133689839572193
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.516320474777448
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5958866036687048
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.748
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4605263157894737
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5231481481481481
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9025161970769926
- name: F1 (macro)
type: f1_macro
value: 0.8979165451427438
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8328638497652581
- name: F1 (macro)
type: f1_macro
value: 0.6469572777603673
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6630552546045504
- name: F1 (macro)
type: f1_macro
value: 0.6493250582245075
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9562495652778744
- name: F1 (macro)
type: f1_macro
value: 0.8695137253747418
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8906298965841429
- name: F1 (macro)
type: f1_macro
value: 0.8885946595123109
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5133689839572193
- Accuracy on SAT: 0.516320474777448
- Accuracy on BATS: 0.5958866036687048
- Accuracy on U2: 0.4605263157894737
- Accuracy on U4: 0.5231481481481481
- Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9025161970769926
- Micro F1 score on CogALexV: 0.8328638497652581
- Micro F1 score on EVALution: 0.6630552546045504
- Micro F1 score on K&H+N: 0.9562495652778744
- Micro F1 score on ROOT09: 0.8906298965841429
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7550595238095238
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
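As an illustrative sketch (assuming `numpy`; the word pairs are arbitrary), the returned relation embeddings can be compared with cosine similarity to check whether two word pairs encode a similar relation:
```python
# Sketch: compare two relation embeddings by cosine similarity (numpy assumed).
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification")
emb_a = np.array(model.get_embedding(['Tokyo', 'Japan']))
emb_b = np.array(model.get_embedding(['Paris', 'France']))
cosine = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
print(cosine)  # values close to 1 indicate a similar relation
```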
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
facebook/spar-marco-unicoil-lexmodel-context-encoder | facebook | 2022-09-21T23:44:07Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2110.06918",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T23:26:34Z | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the context encoder of the MS MARCO UniCOIL Lexical Model (Λ) from the SPAR paper:
[Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?](https://arxiv.org/abs/2110.06918)
<br>
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta and Wen-tau Yih
<br>
**Meta AI**
The associated github repo is available here: https://github.com/facebookresearch/dpr-scale/tree/main/spar
This model is a BERT-base sized dense retriever trained on the MS MARCO corpus to imitate the behavior of [UniCOIL](https://github.com/castorini/pyserini/blob/master/docs/experiments-unicoil.md), a sparse retriever.
The following models are also available:
Pretrained Model | Corpus | Teacher | Architecture | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
Wiki BM25 Λ | Wikipedia | BM25 | BERT-base | facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook/spar-wiki-bm25-lexmodel-context-encoder
PAQ BM25 Λ | PAQ | BM25 | BERT-base | facebook/spar-paq-bm25-lexmodel-query-encoder | facebook/spar-paq-bm25-lexmodel-context-encoder
MARCO BM25 Λ | MS MARCO | BM25 | BERT-base | facebook/spar-marco-bm25-lexmodel-query-encoder | facebook/spar-marco-bm25-lexmodel-context-encoder
MARCO UniCOIL Λ | MS MARCO | UniCOIL | BERT-base | facebook/spar-marco-unicoil-lexmodel-query-encoder | facebook/spar-marco-unicoil-lexmodel-context-encoder
# Using the Lexical Model (Λ) Alone
This model should be used together with the associated query encoder, similar to the [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr) model.
```
import torch
from transformers import AutoTokenizer, AutoModel
# The tokenizer is the same for the query and context encoder
tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 341.3268
score2 = query_emb @ ctx_emb[1] # 340.1626
```
# Using the Lexical Model (Λ) with a Base Dense Retriever as in SPAR
As Λ learns lexical matching from a sparse teacher retriever, it can be used in combination with a standard dense retriever (e.g. [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr#dpr), [Contriever](https://huggingface.co/facebook/contriever-msmarco)) to build a dense retriever that excels at both lexical and semantic matching.
In the following example, we show how to build the SPAR-Wiki model for Open-Domain Question Answering by concatenating the embeddings of DPR and the Wiki BM25 Λ.
```
import torch
from transformers import AutoTokenizer, AutoModel
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
# DPR model
dpr_ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
dpr_query_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
# Wiki BM25 Λ model
lexmodel_tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute DPR embeddings
dpr_query_input = dpr_query_tokenizer(query, return_tensors='pt')['input_ids']
dpr_query_emb = dpr_query_encoder(dpr_query_input).pooler_output
dpr_ctx_input = dpr_ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
dpr_ctx_emb = dpr_ctx_encoder(**dpr_ctx_input).pooler_output
# Compute Λ embeddings
lexmodel_query_input = lexmodel_tokenizer(query, return_tensors='pt')
lexmodel_query_emb = lexmodel_query_encoder(**lexmodel_query_input).last_hidden_state[:, 0, :]
lexmodel_ctx_input = lexmodel_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
lexmodel_ctx_emb = lexmodel_context_encoder(**lexmodel_ctx_input).last_hidden_state[:, 0, :]
# Form SPAR embeddings via concatenation
# The concatenation weight is only applied to query embeddings
# Refer to the SPAR paper for details
concat_weight = 0.7
spar_query_emb = torch.cat(
[dpr_query_emb, concat_weight * lexmodel_query_emb],
dim=-1,
)
spar_ctx_emb = torch.cat(
[dpr_ctx_emb, lexmodel_ctx_emb],
dim=-1,
)
# Compute similarity scores
score1 = spar_query_emb @ spar_ctx_emb[0] # 317.6931
score2 = spar_query_emb @ spar_ctx_emb[1] # 314.6144
```
|
rram12/dqn-SpaceInvadersNoFrameskip-v4 | rram12 | 2022-09-21T23:34:19Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-21T23:33:50Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 567.00 +/- 231.15
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rram12 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rram12
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
espnet/jiyangtang_magicdata_asr_conformer_lm_transformer | espnet | 2022-09-21T23:17:26Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:magicdata",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
]
| automatic-speech-recognition | 2022-09-21T23:15:28Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- magicdata
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/jiyangtang_magicdata_asr_conformer_lm_transformer`
This model was trained by Jiyang Tang using magicdata recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 9d0f3b3e1be6650d38cc5008518f445308fe06d9
pip install -e .
cd egs2/magicdata/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/jiyangtang_magicdata_asr_conformer_lm_transformer
```
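Alternatively, as a hedged sketch (it assumes the `espnet_model_zoo` package is installed and that `sample_16k.wav` is a 16 kHz mono recording), the model can be used directly from Python with the ESPnet2 inference API:
```python
# Sketch only: direct inference without running the recipe scripts.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/jiyangtang_magicdata_asr_conformer_lm_transformer"
)
speech, rate = sf.read("sample_16k.wav")  # 16 kHz mono audio assumed
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```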
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Sep 21 01:11:58 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `9d0f3b3e1be6650d38cc5008518f445308fe06d9`
- Commit date: `Mon Sep 19 20:27:41 2022 -0400`
## asr_train_asr_raw_zh_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|24279|24286|84.4|15.6|0.0|0.0|15.6|15.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_train_lm_transformer_zh_char_valid.loss.ave_asr_model_valid.acc.ave/test|24279|243325|96.4|1.7|2.0|0.1|3.7|15.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_raw_zh_char_sp
ngpu: 0
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 20
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 20000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char_sp/train/speech_shape
- exp/asr_stats_raw_zh_char_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char_sp/valid/speech_shape
- exp/asr_stats_raw_zh_char_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_noeng_sp/wav.scp
- speech
- sound
- - dump/raw/train_noeng_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 的
- 我
- 一
- 歌
- 你
- 天
- 不
- 了
- 放
- 来
- 播
- 下
- 个
- 是
- 有
- 给
- 首
- 好
- 请
- 在
- 听
- 么
- 气
- 要
- 想
- 曲
- 上
- 吗
- 去
- 到
- 这
- 啊
- 点
- 那
- 没
- 就
- 说
- 大
- 唱
- 人
- 最
- 第
- 看
- 会
- 明
- 集
- 吧
- 音
- 还
- 乐
- 今
- 电
- 开
- 能
- 度
- 哪
- 里
- 多
- 打
- 十
- 可
- 怎
- 道
- 什
- 新
- 雨
- 以
- 家
- 回
- 话
- 儿
- 他
- 时
- 小
- 温
- 样
- 爱
- 都
- 吃
- 呢
- 知
- 谁
- 为
- 子
- 们
- 也
- 过
- 老
- 很
- 出
- 中
- 现
- 冷
- 和
- 情
- 行
- 心
- 发
- 专
- 几
- 视
- 张
- 事
- 二
- 辑
- 五
- 三
- 后
- 找
- 些
- 早
- 学
- 晚
- 车
- 别
- 演
- 手
- 呀
- 调
- 感
- 问
- 九
- 饭
- 快
- 风
- 得
- 如
- 自
- 生
- 少
- 地
- 用
- 叫
- 帮
- 机
- 台
- 班
- 欢
- 候
- 起
- 等
- 把
- 年
- 干
- 高
- 太
- 啦
- 方
- 提
- 面
- 八
- 四
- 信
- 意
- 王
- 真
- 求
- 热
- 喜
- 觉
- 周
- 近
- 名
- 做
- 公
- 告
- 关
- 六
- 字
- 安
- 再
- 变
- 间
- 国
- 分
- 着
- 哈
- 水
- 节
- 只
- 动
- 北
- 刚
- 空
- 月
- 玩
- 让
- 伤
- 东
- 谢
- 网
- 七
- 见
- 之
- 比
- 杰
- 又
- 买
- 对
- 始
- 无
- 查
- 声
- 文
- 经
- 醒
- 美
- 西
- 哦
- 走
- 两
- 海
- 妈
- 李
- 报
- 诉
- 接
- 定
- 午
- 外
- 才
- 流
- 长
- 宝
- 门
- 收
- 己
- 室
- 林
- 种
- 南
- 日
- 目
- 陈
- 许
- 词
- 服
- 设
- 记
- 频
- 琴
- 主
- 完
- 友
- 花
- 跟
- 钱
- 睡
- 像
- 嗯
- 何
- 京
- 所
- 预
- 边
- 带
- 作
- 零
- 头
- 号
- 果
- 嘛
- 路
- 办
- 吉
- 语
- 本
- 合
- 卫
- 影
- 市
- 摄
- 通
- 加
- 女
- 成
- 因
- 前
- 衣
- 然
- 档
- 位
- 聊
- 哥
- 载
- 原
- <space>
- 思
- 氏
- 同
- 题
- 但
- 红
- 火
- 她
- 亲
- 传
- 江
- 清
- 息
- 注
- 死
- 啥
- 州
- 片
- 朋
- 相
- 星
- 华
- 已
- 负
- 白
- 色
- 姐
- 春
- 转
- 半
- 换
- 黄
- 游
- 工
- 法
- 理
- 山
- 该
- 英
- 较
- 先
- 穿
- 推
- 直
- 力
- 当
- 冻
- 费
- 刘
- 男
- 写
- 场
- 呵
- 克
- 正
- 单
- 身
- 系
- 苏
- 婆
- 难
- 阳
- 光
- 重
- 荐
- 越
- 马
- 城
- 错
- 次
- 期
- 口
- 金
- 线
- 准
- 爸
- 忙
- 体
- 于
- 句
- 广
- 福
- 活
- 应
- 亮
- 黑
- 特
- 司
- 喝
- 式
- 飞
- 介
- 者
- 慢
- 静
- 百
- 平
- 绍
- 差
- 照
- 团
- 烦
- 便
- 师
- 站
- 德
- 短
- 远
- 需
- 谱
- 郑
- 化
- 或
- 器
- 急
- 钢
- 您
- 忘
- 店
- 妹
- 梦
- 青
- 适
- 总
- 每
- 业
- 夜
- 神
- 版
- 健
- 区
- 实
- 从
- 孩
- 奏
- 韩
- 伦
- 志
- 算
- 雪
- 世
- 认
- 眼
- 模
- 全
- 与
- 书
- 拿
- 送
- 结
- 其
- 解
- 格
- 洗
- 幸
- 舞
- 望
- 速
- 试
- 钟
- 内
- 联
- 停
- 丽
- 课
- 河
- 沙
- 笑
- 久
- 永
- 贝
- 民
- 址
- 超
- 教
- 代
- 件
- 降
- 脑
- 恋
- 常
- 交
- 低
- 伙
- 而
- 毛
- 阿
- 齐
- 习
- 量
- 段
- 选
- 欣
- 昨
- 进
- 闻
- 住
- 受
- 类
- 酒
- 背
- 藏
- 暴
- 摇
- 云
- 怕
- 考
- 咋
- 武
- 赶
- 孙
- 识
- 嵩
- 景
- 某
- 省
- 界
- 罗
- 任
- 坐
- 级
- 遇
- 麻
- 县
- 被
- 龙
- 品
- 蛋
- 湖
- 离
- 希
- 卖
- 轻
- 岁
- 香
- 赏
- 忆
- 答
- 滚
- 保
- 运
- 深
- 央
- 更
- 况
- 部
- ,
- 猪
- 休
- 校
- 留
- 嘿
- 弹
- 挺
- 院
- 泪
- 拉
- 懂
- 暖
- 讲
- 顺
- 底
- 卡
- 使
- 表
- 剧
- 包
- 故
- 导
- 凉
- 连
- 咱
- 制
- 蔡
- 容
- 向
- 物
- 微
- 步
- 切
- 搜
- 婚
- 童
- 约
- 芳
- 凯
- 复
- 未
- 陪
- 防
- 典
- 夏
- 万
- 备
- 指
- 冰
- 管
- 基
- 琪
- 宇
- 晓
- 房
- 良
- 戏
- 悲
- 牛
- 千
- 达
- 汉
- 拜
- 奇
- 梅
- 菜
- 满
- 徐
- 楼
- 询
- 图
- 改
- 练
- 敬
- 票
- 吴
- 络
- 码
- 整
- 简
- 队
- 购
- 普
- 附
- 响
- 胡
- 装
- 暑
- 非
- 喂
- 消
- 浪
- 凤
- 愿
- 累
- 球
- 聚
- 启
- 假
- 潮
- 弟
- 玉
- 绿
- 康
- 拍
- 失
- 哭
- 易
- 木
- 斯
- 跳
- 军
- 处
- 搞
- 升
- 除
- 傻
- 骗
- 证
- 杨
- 园
- 茹
- 赵
- 标
- 窗
- 庆
- 惠
- 够
- 烟
- 俊
- 掉
- 建
- 呗
- 插
- 座
- 害
- 智
- 贵
- 左
- 落
- 计
- 客
- 宁
- 梁
- 舒
- 取
- 往
- 漫
- 兰
- 战
- 随
- 晴
- 条
- 入
- 叶
- 强
- 伟
- 雅
- 尔
- 树
- 余
- 弄
- 季
- 排
- 伍
- 吹
- 宏
- 商
- 柔
- 郊
- 铁
- 遍
- 确
- 闭
- 雄
- 似
- 冒
- 待
- 尘
- 群
- 病
- 退
- 务
- 育
- 坏
- 娘
- 莫
- 资
- 楚
- 辛
- 索
- 利
- 数
- 秦
- 燕
- 且
- 录
- 姑
- 念
- 痛
- 冬
- 尾
- 共
- 初
- 粤
- 哎
- 印
- 示
- 抱
- 终
- 泉
- 货
- 肯
- 它
- 伞
- 性
- 古
- 跑
- 腾
- 鱼
- 曾
- 源
- 银
- 读
- 油
- 川
- 言
- 倩
- 峰
- 激
- 置
- 灯
- 独
- 命
- 谈
- 苦
- 限
- 乡
- 菲
- 伴
- 将
- 震
- 炎
- 散
- 依
- 米
- 及
- 贞
- 兴
- 湿
- 寒
- 敏
- 否
- 俩
- 祝
- 慧
- 精
- 律
- 功
- 托
- 洋
- 敢
- 街
- 铃
- 必
- 弦
- 寻
- 涵
- 突
- 皮
- 反
- 烧
- 秋
- 刮
- 末
- 双
- 细
- 范
- 由
- 君
- 款
- 邮
- 醉
- 紧
- 哲
- 缘
- 岛
- 疼
- 阴
- 旋
- 怪
- 草
- 持
- 狼
- 具
- 至
- 汪
- 鸡
- 医
- 邓
- 份
- 右
- 密
- 士
- 修
- 亚
- 画
- 灵
- 妇
- 甜
- 靠
- 荣
- 程
- 莲
- 魂
- 此
- 户
- 属
- 贤
- 充
- 萧
- 血
- 逼
- 闹
- 吸
- 娜
- 肉
- 抒
- 价
- 桥
- 剑
- 巴
- 暗
- 豆
- 迪
- 戴
- 迅
- 朝
- 艺
- 谭
- 治
- 祥
- 尽
- 闷
- 宫
- 艳
- 父
- 存
- 媳
- 跪
- 雾
- 杜
- 味
- 奕
- 兵
- 脸
- 炫
- 兄
- 妮
- 优
- 熊
- 床
- 般
- 净
- 航
- 帝
- 刻
- 孤
- 轩
- 村
- 支
- 玮
- 狗
- 纯
- 楠
- 呐
- 冠
- 元
- 盛
- 决
- 诗
- 爷
- 堵
- 陶
- 乖
- 迷
- 羽
- 忧
- 倒
- 蜜
- 晒
- 仔
- 却
- 姜
- 哟
- 餐
- 雷
- 鸟
- 馆
- 韶
- 箱
- 操
- 乌
- 借
- 恒
- 舍
- 药
- 块
- 澡
- 石
- 软
- 奶
- 笨
- 夫
- 朴
- 义
- 派
- 晨
- 佳
- 科
- 姿
- 显
- 咏
- 饿
- 付
- 宗
- 键
- 止
- 员
- 磊
- 勤
- 崔
- 偏
- 额
- 免
- 乱
- 怀
- 侠
- 岳
- 斌
- 助
- 征
- 概
- 吕
- 彩
- 板
- 松
- 各
- 组
- 历
- 济
- 象
- 茶
- 领
- 按
- 创
- 镇
- 翻
- 配
- 宿
- 咯
- 帅
- 型
- 估
- 佩
- 惜
- 详
- 续
- 蓝
- 麟
- 珠
- 颜
- 彦
- 农
- 盘
- 母
- 鞋
- 账
- 博
- 礼
- 环
- 套
- 效
- 郭
- 居
- 佑
- 根
- 惊
- 圳
- 叔
- 若
- 逆
- 鸿
- 锁
- 食
- 芸
- 裤
- 娱
- 漂
- 野
- 麦
- 豫
- 顾
- 爽
- 族
- 仙
- 围
- 观
- 链
- 嗨
- 厅
- 巍
- 劲
- 极
- 呼
- 咖
- 淑
- 丝
- 昌
- 嘉
- 绝
- 史
- 击
- 承
- 蔚
- 堂
- 沉
- 笔
- 朵
- 凰
- 琥
- 匆
- 炜
- 输
- 须
- 娴
- 嘻
- 牌
- 田
- 杀
- 滴
- 鬼
- 桦
- 赛
- 玟
- 抽
- 案
- 轮
- 立
- 摆
- 屋
- 诺
- 丁
- 佰
- 蒙
- 澄
- 羊
- 添
- 质
- 波
- 萨
- 狂
- 丹
- 屁
- 角
- 章
- 产
- 宜
- 笛
- 严
- 维
- 测
- 娃
- 料
- 宋
- 洲
- 卦
- 猜
- 港
- 挂
- 淘
- 郁
- 统
- 断
- 锅
- 稍
- 绮
- 汗
- 辉
- 乎
- 破
- 钧
- 芹
- 择
- 胖
- 即
- 呜
- 旅
- 拨
- 紫
- 哇
- 默
- 论
- 朱
- 登
- 脚
- 订
- 秀
- ?
- 社
- 飘
- 尚
- 另
- 骂
- 并
- 恶
- 扫
- 裸
- 姨
- 苹
- 压
- 厌
- 汇
- 爆
- 局
- 睛
- 庄
- 唐
- 嘞
- 偶
- 乔
- 染
- 熟
- 喆
- 愉
- 虎
- 技
- 威
- 布
- 嘴
- 湾
- 术
- 讨
- 尼
- 诶
- 坊
- 删
- 桑
- 庾
- 斗
- 呃
- 仁
- 训
- 汤
- 脱
- 凡
- 例
- 唉
- 畅
- 参
- 晕
- 肥
- 营
- 鲁
- 减
- 琳
- 瑞
- 透
- 素
- 厉
- 追
- 扰
- 控
- 谣
- 足
- 检
- 扬
- 娇
- 耳
- 津
- 倾
- 淡
- 露
- 妞
- 熙
- 值
- 罪
- 浩
- 探
- 盐
- 列
- 券
- 潘
- 官
- 篇
- 纪
- 签
- 棒
- 丑
- 陆
- 养
- 佛
- 唯
- 芮
- 哒
- 榜
- 培
- 疯
- 财
- 卷
- 痴
- 凌
- 瓜
- 猫
- 泡
- 据
- 厦
- 辣
- 恩
- 土
- 补
- 递
- 伏
- 灰
- 糖
- 玛
- 黎
- 湘
- 遥
- 谅
- 桃
- 曼
- 招
- 勇
- 泰
- 杭
- 缓
- 朗
- 替
- 刷
- 封
- 骨
- 盖
- 眠
- 担
- 忽
- 蛮
- 蜗
- 肚
- 喽
- 懒
- 继
- 辈
- 魔
- 哼
- 顶
- 冲
- 番
- 释
- 形
- 页
- 渡
- 触
- 裂
- 逛
- 圆
- 迎
- 态
- 弃
- 洛
- 丰
- 困
- 展
- 束
- 巧
- 临
- 际
- 涛
- 酷
- 洁
- 毕
- 呆
- 励
- 臭
- 暂
- 评
- 沧
- 磨
- 洞
- 厂
- 吵
- 煮
- 旧
- 幽
- 寄
- 政
- 丫
- 闯
- 举
- 误
- 护
- 状
- 寂
- 牙
- 杯
- 议
- 眉
- 享
- 剩
- 秘
- 噢
- 耿
- 致
- 偷
- 丢
- 刀
- 销
- 盒
- 编
- 珍
- 葛
- 译
- 颗
- 括
- 奥
- 鲜
- 沈
- 婷
- 摩
- 炒
- 惯
- 啡
- 混
- 燥
- 扣
- 晶
- 柏
- 拥
- 旭
- 拾
- 验
- 嫁
- 铺
- 棉
- 划
- 虾
- 浙
- 寓
- 剪
- 贴
- 圣
- 颖
- 申
- 枝
- 艾
- 旁
- 溪
- '?'
- 厚
- 驶
- 燃
- 虽
- 途
- 祖
- 职
- 泽
- 腿
- 薇
- 阵
- 移
- 淋
- 灭
- 寞
- 森
- 延
- 孝
- 沥
- 迟
- 伪
- 催
- 投
- 伯
- 谓
- 诚
- 架
- 耶
- 项
- 撒
- 邦
- 善
- 鼻
- 芬
- 闲
- 增
- 卓
- 层
- 鹏
- 敲
- 镖
- 粉
- 欧
- 纸
- 甘
- 昆
- 哩
- 坚
- 苍
- 积
- 筝
- 擦
- 董
- 吻
- 折
- 欺
- 疆
- 勒
- 售
- 船
- 胜
- 甄
- 杂
- 骑
- 贱
- 饼
- 称
- 隆
- 竟
- 逃
- 啷
- 引
- 宾
- 莉
- 境
- 奖
- 救
- 讯
- 恰
- 垃
- 圾
- 宅
- 潜
- 皇
- 符
- 徽
- 造
- 翔
- 粥
- 桌
- 租
- 险
- 驾
- 祭
- 昂
- 牧
- 宣
- 综
- 谷
- 私
- 瓷
- 避
- 肖
- 闪
- 圈
- 喱
- 耀
- 悟
- 秒
- 篮
- 逗
- 蝶
- 趣
- 恨
- 恐
- 饺
- 碎
- 奔
- 幼
- 股
- 锦
- 锡
- 椅
- 玲
- 刑
- 嗓
- 喊
- 虑
- 俺
- 镜
- 耐
- 鹿
- 狄
- 兮
- 返
- 恭
- 含
- 傅
- 沟
- 莹
- 妃
- 忠
- 赤
- 喔
- 抓
- 迈
- 众
- 豪
- 祈
- 馨
- 嬛
- 庭
- 异
- 辰
- 琅
- 荷
- 匪
- 吐
- 警
- 虹
- 吓
- 聪
- 悔
- 归
- 富
- 陕
- 魏
- 欲
- 菊
- 雹
- 隐
- 涯
- 忍
- 芦
- 琊
- 酸
- 逊
- 亦
- 咪
- 瞎
- 滨
- 胸
- 采
- 穹
- 究
- 炊
- 痒
- 莎
- 柳
- 井
- 洪
- 胎
- 鼓
- 润
- 迁
- 玫
- 滩
- 傲
- 袁
- 赚
- 研
- 躺
- 烤
- 莱
- 搬
- 蒋
- 曹
- 孟
- 嫂
- 甲
- 瑰
- 窝
- 令
- 堆
- 废
- 掌
- 巡
- 妙
- 袋
- 争
- 萌
- 挑
- 册
- 饮
- 勋
- 珊
- 戒
- 绵
- 亡
- 劳
- 搭
- 甩
- 匙
- 彭
- 锋
- 钥
- 率
- 吟
- 鼠
- 纱
- 坡
- 潇
- 挣
- 逝
- 针
- 弱
- 妍
- 稳
- 怒
- 塘
- 卢
- 宵
- 悠
- 饱
- 披
- 瘦
- 浮
- 烂
- 壶
- 截
- 勿
- 序
- 委
- 兔
- 塔
- 执
- 墨
- 府
- 宙
- 欠
- 巨
- 帽
- 占
- 顿
- 权
- 坠
- 碰
- 著
- 硬
- 炮
- 骚
- 肃
- 规
- 厕
- 贾
- 葫
- 徒
- 瓶
- 辽
- 耍
- 赢
- 桂
- 浦
- 趟
- 柯
- 悉
- 恼
- 禁
- 殊
- 卧
- 赞
- 益
- 责
- 虚
- 姓
- 愁
- 舅
- 残
- 既
- 拖
- 棍
- 幻
- 库
- 骄
- 烈
- 尊
- 伊
- 缺
- 迹
- 疑
- 汽
- 郎
- 鸭
- 仪
- 盗
- 幺
- 萱
- 胃
- 脏
- 努
- 勉
- 池
- 咳
- 奋
- 批
- 蝴
- 监
- 犯
- 滑
- 牵
- 冯
- 败
- 毒
- 怖
- 绪
- 帐
- 协
- 韵
- 怜
- 薛
- 姚
- 副
- 塞
- 蕉
- 夹
- 萝
- 爹
- 貌
- 奈
- 乞
- 隔
- 澳
- 姥
- 妖
- 腰
- 纳
- 龄
- 材
- 旗
- 萤
- 俗
- 昼
- 坛
- 霍
- 怡
- 丐
- 咒
- 础
- 嘎
- 虫
- 枪
- 遗
- 献
- 陌
- 侣
- 。
- 昧
- 筒
- 袭
- 厨
- 爬
- 茂
- 媛
- 慰
- 填
- 霞
- 娟
- 摸
- 逍
- 赫
- 霾
- 泥
- 暧
- 翅
- 谦
- 夕
- 瑶
- 鑫
- 刺
- 袖
- 拒
- 玄
- 涂
- 溜
- 旬
- 鸣
- 泷
- 距
- 阻
- 绩
- 狠
- 宽
- 狐
- 赖
- 握
- 循
- 靓
- 述
- 糕
- 踏
- 侯
- 劵
- 壮
- 抄
- 苟
- 岗
- 供
- 湛
- 炼
- 烫
- 棋
- 糊
- 饶
- 悄
- 霸
- 竹
- 哀
- 拔
- 蓉
- 旦
- 晰
- 振
- 漠
- 苗
- 帘
- 糟
- 崇
- 踩
- 汕
- 寝
- 刹
- 蔬
- 旺
- 躁
- 守
- 液
- 疗
- 晋
- 坤
- 洒
- 串
- 屏
- 翠
- 鹅
- 腻
- 毅
- 蹈
- 党
- 咩
- 灿
- 哄
- 核
- 横
- 谎
- 忏
- 映
- 倔
- 则
- 肤
- 贺
- 潍
- 焦
- 渐
- 坑
- 瞄
- 融
- 琼
- 尤
- 逸
- 碧
- 葡
- 卜
- 察
- 邢
- 薄
- 亏
- 绒
- 萄
- 婉
- 闺
- 势
- 描
- 均
- 梨
- 椒
- 慕
- 污
- 弯
- 繁
- 炸
- 肿
- 阅
- 肺
- 席
- 呦
- 碟
- 耻
- 端
- 叹
- 庸
- 危
- 痘
- 峡
- 腐
- 霜
- 拳
- 昴
- 荡
- 屎
- 纠
- 夸
- 尿
- 钰
- 撼
- 嗽
- 雯
- 症
- 衡
- 互
- 孔
- 钻
- 萍
- 娄
- 斤
- 悦
- 谊
- 扯
- 驴
- 歉
- 扎
- 庐
- 蒲
- 吼
- 熬
- 鸳
- 蒸
- 驹
- 允
- 射
- 酱
- 鸯
- 企
- 馒
- 乘
- 葱
- 泳
- 莞
- 脆
- 寨
- 损
- 陀
- 膀
- 淮
- 侃
- 霉
- 施
- 橙
- 煲
- 妆
- 审
- 宠
- 穷
- 敌
- 堡
- 樱
- 诞
- 胆
- 彤
- 祷
- 渭
- 霆
- 亭
- 璐
- 邵
- 壁
- 禺
- 墙
- 葬
- 垫
- 吾
- 粒
- 爵
- 弘
- 妻
- 蕾
- 咨
- 固
- 幕
- 粗
- 抢
- 访
- 贸
- 挥
- 饰
- 硕
- 域
- 岸
- 咬
- 晗
- 姆
- 骤
- 抖
- 判
- 鄂
- 获
- 锻
- 郝
- 柜
- 醋
- 桐
- 泣
- 粘
- 革
- 脾
- 尸
- 侧
- 辆
- 埋
- 稻
- 肠
- 嫌
- 彬
- 庚
- 彼
- 龟
- 弥
- 籍
- 纽
- 喷
- 氛
- 币
- 蠢
- 磁
- 袜
- 柴
- 寸
- 韦
- 忐
- 忑
- 恢
- 缩
- 捷
- 绕
- 翼
- 琦
- 玻
- 驻
- 屈
- 岩
- 颂
- 仓
- 茜
- 璃
- 裙
- 僵
- 柿
- 稿
- 巾
- 撑
- 尹
- 嘟
- 牡
- 昏
- 歇
- 诵
- 丸
- 梯
- 挡
- 袄
- 逢
- 徙
- 渴
- 仰
- 跨
- 碗
- 阔
- 税
- 拼
- 宥
- 丞
- 凶
- 析
- 炖
- 舌
- 抗
- 脖
- 甚
- 豚
- 敷
- 瓦
- 织
- 邀
- 浏
- 猛
- 歪
- 阶
- 兽
- 俄
- 鹤
- 禹
- 纹
- 闽
- 惹
- 煤
- 患
- 岭
- 瑜
- 稀
- 拆
- 凄
- 崎
- 芝
- 摊
- 尺
- 彻
- 览
- 贷
- 珂
- 憋
- 径
- 抚
- 魅
- 悬
- 胶
- 倍
- 贯
- 籁
- 乃
- 哑
- 惑
- 撞
- 箫
- 绣
- 扁
- 苑
- 靖
- 漏
- 挤
- 轶
- 叮
- 烨
- 菇
- 砸
- 趁
- 媚
- 仅
- 藤
- 邱
- 陵
- 躲
- 滋
- 叛
- 捉
- 孕
- 铜
- 衫
- 寿
- 寺
- 枫
- 豹
- 伽
- 翡
- 蜂
- 丙
- 姗
- 羡
- 凑
- 鄙
- 庙
- 铭
- 宰
- 廖
- 肩
- 臣
- 抑
- 辅
- 誓
- 扇
- 啪
- 羞
- 诊
- 敦
- 跃
- 俞
- 肝
- 坦
- 贡
- 踢
- 齿
- 尧
- 淀
- 叉
- 浴
- 狮
- 昊
- 蟹
- 捏
- 略
- 禾
- 纲
- 赔
- 憾
- 赋
- 丘
- 尝
- 钓
- 涕
- 猴
- 鸽
- 纵
- 奉
- 涨
- 揍
- 怨
- 挨
- 兜
- 冈
- 凭
- 策
- 裴
- 摔
- 喵
- 佐
- 喉
- 膏
- 瑟
- 抬
- 纷
- 廊
- 贼
- 煎
- 熄
- 渝
- 缠
- 纶
- 岚
- 衬
- 遮
- 翰
- 誉
- 摘
- 勾
- 赣
- 姬
- 娅
- 撤
- 霖
- 泊
- 膝
- 耽
- 犹
- 仍
- 辞
- 溃
- 骏
- 弓
- 膜
- 诱
- 慌
- 惨
- 噪
- 涩
- 潭
- 幂
- 梓
- 植
- 罚
- 扮
- 涮
- 雁
- 兆
- 舟
- 咸
- 犀
- 炉
- 筋
- 陇
- 狸
- 帕
- 噶
- 茄
- 嗒
- 纬
- 障
- 聘
- 盼
- 盟
- 咧
- 灏
- 菠
- 巷
- 帖
- 慈
- 枕
- 唤
- 慨
- 呛
- 叽
- 砖
- 窍
- 瞒
- 龚
- 促
- 尖
- 螺
- 捞
- 盆
- 茫
- 屌
- 械
- 乳
- 啤
- 玺
- 廷
- 谐
- 吖
- 帆
- 蛇
- 琵
- 琶
- 扑
- 跌
- 崩
- 扭
- 扔
- 咿
- 菩
- 茉
- 攻
- 虐
- 甸
- 璇
- 驰
- 瞬
- 鸦
- 厢
- 囊
- 闫
- 届
- 墓
- 芒
- 栗
- 沫
- 违
- 缝
- 棵
- 杏
- 赌
- 灾
- 颤
- 沂
- 肇
- 桶
- 霄
- !
- 咙
- 绥
- 仲
- 愈
- 竖
- 菌
- 捕
- 烘
- 阮
- 皆
- 咚
- 劫
- 揭
- 郸
- 庞
- 喇
- 拐
- 奴
- 咔
- 幅
- 偿
- 咦
- 召
- 薪
- 盯
- 黛
- 杉
- 辨
- 邯
- 枯
- 沃
- 吊
- 筷
- 陷
- 鹰
- 嗦
- 噻
- 屯
- 殇
- 抵
- 雕
- 辩
- 枣
- 捂
- 瘾
- 粮
- 巢
- 耗
- 储
- 殷
- 糯
- 轨
- 沾
- 淇
- 毁
- 沐
- 蚊
- 鉴
- 灌
- 玖
- 唔
- 芙
- 淳
- 昕
- 裹
- 茧
- 浑
- 睿
- 踪
- 邪
- 瘩
- 恺
- 斜
- 汰
- 逐
- 铮
- 毫
- 胞
- 昭
- 妥
- 筑
- 贪
- 蘑
- 皓
- 颐
- 疙
- 捡
- 泛
- 债
- 栎
- 棚
- 腹
- 构
- 蓬
- 宪
- 叭
- 愚
- 押
- 蜀
- 夷
- 娶
- 盾
- 倪
- 牟
- 抛
- 壳
- 衍
- 杆
- 撕
- 亿
- 纤
- 淹
- 翘
- 蔷
- 芊
- 罩
- 拯
- 嗷
- 浇
- 宴
- 遵
- 冥
- 祸
- 塑
- 沛
- 猎
- 携
- 噜
- 喘
- 缴
- 砍
- 唢
- 曦
- 遛
- 罢
- 峨
- 戚
- 稚
- 揉
- 堰
- 螃
- 薯
- 乙
- 矿
- 挽
- 弛
- 埃
- 淅
- 疲
- 窦
- 烛
- 媒
- 尬
- 汀
- 谨
- 罐
- 劣
- 伶
- 煜
- 栏
- 榆
- 矛
- 琐
- 槽
- 驼
- 渤
- 沒
- 泄
- 粑
- 匀
- 囧
- 茵
- 霹
- 澈
- 岑
- 乏
- 栋
- 拌
- 框
- 祁
- 叨
- 斋
- 玥
- 僧
- 疏
- 绳
- 晃
- 抹
- 授
- 蓄
- 檬
- 仇
- 毯
- 啵
- 泼
- 阁
- ','
- 邹
- 阎
- 渠
- 函
- 腊
- 割
- 绑
- 扶
- 肌
- 卑
- 匠
- 雳
- 绯
- 婧
- 煌
- 蒂
- 腔
- 仿
- 遭
- 阜
- 峻
- 劝
- 绎
- 黔
- 贫
- 剁
- 荆
- 樊
- 卸
- 锄
- 阕
- 狱
- 冉
- 鲍
- 荒
- 侄
- 唇
- 忌
- 掖
- 竞
- 匹
- 仗
- 锤
- 穆
- 践
- 冶
- 柱
- 聂
- 捧
- 唠
- 翁
- 掏
- 塌
- 沁
- 巩
- 沸
- 蜡
- 痕
- 削
- 晟
- 眯
- 灶
- 婴
- 啸
- 釜
- 兼
- 剂
- 氧
- 赐
- 铠
- 攀
- 扩
- 朦
- 胧
- 孽
- 挖
- 钞
- 碍
- 凝
- 鼎
- 屉
- 斑
- 抠
- 哗
- 哨
- 婶
- 劈
- 冕
- 霏
- 汾
- 雀
- 浚
- 屠
- 唰
- 疚
- 芽
- 惦
- 裕
- 仑
- 厘
- 烁
- 瞧
- 蚂
- 涿
- 尴
- 埔
- 橘
- 磕
- 苇
- 脂
- 臂
- 蛙
- 镁
- 绽
- 卿
- 荃
- 莺
- 迫
- 敖
- 呈
- 勃
- 碌
- 讶
- 赠
- 巫
- 篱
- 浓
- 攒
- 裁
- 嫣
- 彪
- 娣
- 坟
- 廉
- 聆
- 铉
- 瞌
- 葵
- 鞍
- 坎
- 畜
- 爪
- 锯
- 潼
- 矣
- 闸
- 俱
- 蹭
- 戈
- 扒
- 滤
- 撇
- 浅
- 唧
- 觅
- 婕
- 牢
- 堕
- 丈
- 滕
- 御
- 溢
- 阑
- 楞
- 伺
- 馋
- 禄
- 胳
- 措
- 伐
- 滔
- 沦
- 澎
- 谙
- 桢
- 肾
- 熏
- 炅
- 邻
- 吞
- 噔
- 哔
- 沿
- 竺
- 闵
- 妨
- 啰
- 儒
- 锈
- 虞
- 颠
- 脊
- 膊
- 搓
- 岐
- 浸
- 兹
- 吨
- 垂
- 晏
- 痹
- 哆
- 漆
- 叠
- 莓
- 嘀
- 挫
- 馈
- 愧
- 佟
- 疾
- 蒜
- 盈
- 侬
- 烊
- 炙
- 蜢
- 诡
- 莆
- 蛾
- 轴
- 妒
- 洱
- 擎
- 脉
- 飓
- 泫
- 浆
- 岔
- 蹦
- 愤
- 琛
- 趴
- 绘
- 忻
- 拽
- 牲
- 馅
- 鲨
- 靴
- 鳅
- 俐
- 罕
- 呕
- 凋
- 绫
- 蕊
- 圃
- 猥
- 氓
- 歧
- 秧
- 栈
- 梧
- 衷
- 巅
- 彝
- 嚎
- 菁
- 渔
- 茬
- 汐
- 拓
- 昔
- 囚
- 舜
- 搁
- 泸
- 涟
- 蚁
- 裳
- 鞭
- 辟
- 蝎
- 簧
- 予
- 倦
- 傍
- 荔
- 瞳
- 碑
- 桨
- 疫
- 骁
- 驿
- 柠
- 妾
- 隶
- 菏
- 煽
- 麒
- 奎
- 驯
- 飙
- 姻
- 沅
- 扉
- 斩
- 奢
- 蚌
- 掩
- 蹲
- 丧
- 辱
- 焉
- 佘
- 襄
- 芯
- 枉
- 谋
- 渊
- 哮
- 喀
- 朔
- 侏
- 姝
- 戎
- 磅
- 督
- 诛
- 奸
- 苞
- 庵
- 馄
- 聋
- 滁
- 垚
- 柬
- 猩
- 夺
- 啼
- 坝
- 竭
- 黏
- 衰
- 遂
- 潞
- 谜
- 蜻
- 蜓
- 瓣
- 秉
- 檐
- 楂
- 嗑
- 搅
- 嘚
- 倚
- 乒
- 宛
- 崽
- 恕
- 轰
- 淄
- 晞
- 酬
- 砂
- 筠
- 薰
- 蒿
- 瞅
- 勺
- 阙
- 伸
- 嚏
- 湄
- 咆
- 坂
- 役
- 掰
- 渣
- 魁
- 诅
- 浒
- 妓
- 珑
- 捎
- 焊
- 饲
- 脍
- 荫
- 堤
- 轿
- 乓
- 筹
- 撸
- 饨
- 渺
- 桓
- 旷
- 笙
- 晖
- 慎
- 埠
- 挪
- 汝
- 浊
- 仨
- 鳄
- 濮
- 汶
- 邰
- 钉
- 蔽
- 亨
- 屑
- 铅
- 喃
- 葩
- 哉
- 睁
- 骆
- 涉
- 汁
- 拦
- 痞
- 芜
- 俪
- 兑
- 梵
- 刊
- 缅
- 彰
- 俑
- 桔
- 堪
- 鸥
- 契
- 覆
- 拷
- 珞
- 诸
- 棱
- 忒
- 嫩
- 梶
- 贻
- 藕
- 愣
- 湃
- 趋
- 甭
- 嗖
- 怯
- 憧
- 珀
- 缸
- 蔓
- 稣
- 筱
- 杠
- 崖
- 凳
- 裆
- 隧
- 锣
- 嘣
- 瀑
- 漪
- 柄
- 凸
- 颁
- 迦
- 烙
- 岱
- 瑄
- 吭
- 肆
- 鳞
- 晾
- 憬
- 邑
- 甥
- 掀
- 褂
- 淫
- 瓢
- 暮
- 喧
- 祛
- 恙
- 禅
- 柚
- 樟
- 疮
- 嗡
- 懈
- 茨
- 矮
- 诠
- 侮
- 眨
- 羲
- 掐
- 琉
- 雍
- 晔
- 凹
- 怂
- 禧
- 蹬
- 绅
- 榄
- 箍
- 詹
- 溶
- 黯
- 啃
- 驸
- 朕
- 婺
- 援
- 铲
- 呻
- 犬
- 捣
- 眷
- 剃
- 惧
- 芷
- 叱
- 娥
- 钦
- 矫
- 憨
- 骊
- 坪
- 俏
- 炳
- 妲
- 冀
- 刁
- 馍
- 琢
- 扛
- 瞿
- 辙
- 茅
- 寡
- 絮
- 呷
- 哺
- 咕
- 驱
- 搂
- 圭
- 嫉
- 涓
- 茱
- '"'
- 笼
- 讽
- 涡
- 泓
- 弊
- 诀
- 璧
- 舔
- 嬅
- 亢
- 沪
- 绢
- 钙
- 喏
- 馥
- 怅
- 簿
- 薜
- 捶
- 冤
- 脐
- 岂
- 溺
- 蕙
- 铿
- 锵
- 锐
- 呸
- 砰
- 亩
- 漳
- 阪
- 栀
- 坞
- 跤
- 蓓
- 舰
- 缕
- 羁
- 芋
- 畔
- 衔
- 铝
- 盲
- 株
- 搏
- 曙
- 惩
- 逻
- 蹄
- 涤
- 宕
- 咤
- 尉
- 嘘
- 瀚
- 仃
- 稽
- 霑
- 飕
- 垮
- 酿
- 畏
- 鲸
- 梗
- 署
- 砒
- 雏
- 茗
- 恬
- 螂
- 拂
- 憔
- 悴
- 钗
- 棕
- 劭
- 歹
- 笠
- 厄
- 焖
- 拣
- 逮
- 蕴
- 淌
- 枸
- 杞
- 雇
- 漯
- 邂
- 逅
- ·
- 荟
- 塾
- 涌
- 挚
- 舱
- 惬
- 剖
- 榴
- 侦
- 摁
- 烹
- 烽
- 俘
- 麓
- 犊
- 酌
- 匿
- 梭
- 覃
- 隽
- 惆
- 掠
- 舵
- 艰
- 蟑
- 瘤
- 仆
- 穴
- 涅
- 衿
- 嚷
- 峪
- 榕
- 吒
- 酪
- 曝
- 帧
- 靶
- 嚣
- 踝
- 翊
- 陂
- 髓
- 瑚
- 裘
- 芍
- 炬
- 鲅
- 蚕
- 肢
- 颊
- 陛
- 籽
- 粟
- 滞
- 煞
- 乾
- 媞
- 刨
- 碾
- 瘫
- 盔
- 侈
- 徘
- 徊
- 熔
- 吆
- 褪
- 拟
- 廓
- 翟
- 俾
- 沽
- 垒
- 萎
- 僻
- 豌
- 卵
- 狡
- 篓
- 栽
- 崴
- 拧
- 颈
- 咐
- 胭
- 阱
- 鄱
- 漓
- 厥
- 烬
- 糙
- 褥
- 炕
- 恍
- 襟
- 韧
- 眸
- 毙
- 垢
- 叙
- 辜
- 酝
- 璋
- 荧
- 魇
- 皈
- 觞
- 喻
- 孺
- 匈
- 铛
- 诈
- 盏
- 淼
- 佣
- 苓
- 缚
- 洼
- 疡
- 猬
- 腑
- 阡
- 鲫
- 鹭
- 鹂
- 笆
- 埙
- 癌
- 璀
- 璨
- 疹
- 蓑
- 芭
- 嘶
- 桀
- 吩
- 泾
- 铂
- 倘
- 囗
- 璜
- 窃
- 癫
- 璞
- 墟
- 钩
- 粹
- 镐
- 韬
- 牺
- 寮
- 喳
- 鄞
- 笋
- 臧
- 疤
- 捐
- 腥
- 嬷
- 燮
- 濠
- 棠
- 夙
- 弑
- 乍
- 剔
- 嘈
- 钇
- 衅
- 挝
- 橡
- 矜
- 圩
- 恳
- 瑛
- 蔺
- 兖
- 焕
- 懿
- 钏
- 栾
- 筐
- 苒
- 碳
- 韭
- 箭
- 婵
- 迭
- 枷
- 孜
- 咽
- 悯
- 漉
- 噬
- 侍
- 蝉
- 涧
- 鹦
- 鹉
- 冼
- 竿
- …
- 袈
- 诏
- 锢
- 泠
- 匡
- 枚
- 坷
- 邝
- 癖
- 绷
- 皖
- 滦
- 滥
- 荨
- 虏
- 拈
- 浜
- 颓
- “
- ”
- 戳
- 钮
- 梳
- 溅
- 徨
- 旨
- 罂
- 蹉
- 腌
- 隙
- 侨
- 槟
- 泌
- 珈
- 芵
- 腮
- 晤
- 墩
- 鲤
- 扳
- 栓
- 窑
- 荏
- 饪
- 泵
- 猿
- 眀
- 嗝
- 禽
- 朽
- 偕
- 胀
- 谍
- 捅
- 蜉
- 蝣
- 蹋
- 拱
- 氯
- 噼
- 蚩
- 芥
- 蛟
- 貂
- 荚
- 痰
- 殿
- 遣
- 丛
- 碱
- 殖
- 炽
- 嚓
- 彗
- 窟
- 鳌
- 矶
- 镯
- 乜
- 髙
- 蛤
- 荤
- 坨
- 漱
- 惰
- 跎
- 萸
- 曰
- 亘
- 窘
- 厮
- 绐
- 黝
- 鞠
- 漩
- 蚱
- 垣
- 翩
- 嬴
- 彷
- 椰
- 砚
- 褐
- 黍
- 噗
- 耕
- 挠
- 妩
- 掂
- 峯
- 灸
- 晌
- 溧
- 鹃
- 屿
- 昙
- 廾
- 冢
- 龌
- 龊
- 瞪
- 刽
- 脓
- 壹
- 羱
- 奠
- 贰
- 佬
- 拙
- 颢
- 嘱
- 糗
- 昀
- 巳
- 辕
- 惫
- 黒
- 辐
- 窈
- 窕
- 拢
- 缪
- 逞
- 吝
- 裟
- 钝
- 寇
- 耙
- 隋
- 蝇
- 仟
- 铨
- 赊
- 皑
- 衢
- 胚
- 腺
- 啧
- 淤
- 妄
- 氢
- 寅
- 叻
- 嘲
- 叼
- 沮
- 磐
- 芈
- 饥
- 槿
- 卤
- 懵
- 惴
- 毋
- 箩
- 苔
- 峥
- 斥
- 矬
- 佚
- 肮
- 皎
- 憎
- 樨
- 讴
- 鳖
- 煦
- 焚
- 泗
- 皂
- 礁
- 睬
- 梢
- 妤
- 佗
- 蝌
- 蚪
- 渗
- 暇
- 卟
- 悼
- 瑨
- 伎
- 纺
- 耆
- 舶
- 礴
- 豺
- 涪
- 谬
- 赴
- 婪
- 吱
- 麽
- 犁
- 潸
- 鸪
- 鸢
- 鄯
- 讷
- 弶
- 橄
- 撬
- 赦
- 岷
- 垓
- 绞
- 虔
- 剥
- 澜
- 酗
- 谛
- 骥
- 撅
- 鱿
- 犷
- 讪
- 秃
- 卞
- 缆
- 蓦
- 庶
- 勐
- 笫
- 敛
- 弗
- 痱
- 啬
- 硚
- 昱
- 忿
- 撩
- 椿
- 侵
- 窄
- 邛
- 崃
- 涸
- 赈
- 狭
- 嵌
- 淖
- 瑙
- 踹
- 傈
- 僳
- 缭
- 睦
- 窜
- 嘅
- 樵
- 爰
- 侗
- 逑
- 弧
- 侑
- :
- 娉
- 蝙
- 蝠
- 骅
- 饴
- 揣
- /
- 鲈
- 綦
- 拴
- 硝
- 梆
- 馗
- 夭
- 扼
- 鳃
- 惚
- 扈
- 矢
- 藁
- 飚
- 妊
- 踮
- 惟
- 痊
- 艇
- 偎
- 魄
- 篝
- 簸
- 擞
- 粽
- 缥
- 缈
- 跷
- 咁
- 悍
- 菀
- 陡
- 橱
- 遐
- 榨
- 渎
- 蹂
- 躏
- 舂
- 轼
- 枰
- 焰
- 幌
- 邸
- 捜
- 灼
- 茯
- 芎
- 穗
- 棘
- 碜
- 颉
- 鹧
- 啄
- 趾
- 茎
- 揽
- 靳
- 黜
- 惋
- 亥
- 铡
- 栅
- 挞
- 眈
- 膘
- 犍
- 珉
- 镪
- 昵
- 霓
- 圪
- 汲
- 惺
- 瑕
- 桩
- 洽
- 唏
- 耒
- 唻
- 豁
- 郓
- 纣
- 亊
- 鳝
- 蟆
- 癣
- 碚
- 踌
- 殁
- 缉
- 痔
- 頔
- 蔫
- ;
- 掺
- 愫
- 祟
- 拘
- 蜘
- 蛛
- 涎
- 耸
- 揪
- 芪
- 腕
- 袍
- 慵
- 绻
- 绛
- 螨
- 捌
- 墅
- 篷
- 啾
- 孪
- 唬
- 褛
- 跶
- 壤
- 慷
- 痧
- 懦
- 郯
- 莴
- 茴
- 嘬
- 铎
- 辫
- 绚
- 簇
- 墘
- 婿
- 咻
- 斡
- 沱
- 譬
- 羔
- 藓
- 肋
- 棂
- 赎
- 炭
- 徵
- 簌
- 艘
- 苪
- 眶
- 嘭
- 霎
- 馊
- 秽
- 仕
- 镶
- 纨
- 摧
- 蒨
- 闰
- 迩
- 篙
- 嚯
- 郫
- 陋
- 殒
- 邃
- 浔
- 瑾
- 鳟
- 祯
- 泻
- 氟
- 猾
- 酥
- 萦
- 郴
- 祀
- 涼
- 屡
- 摹
- 毡
- 妪
- 郡
- 柘
- 裱
- 囔
- 楷
- 鄄
- 蕲
- 偲
- 菘
- 姣
- 瞥
- 肪
- 饽
- 惭
- 胁
- 垄
- 榻
- 讼
- 旱
- 鬓
- 凇
- 钊
- 掣
- 浣
- 凃
- 蓥
- 臊
- 夔
- 脯
- 苛
- 阀
- 睫
- 腋
- 姊
- 躬
- 瘁
- 奄
- 靡
- 盂
- 柑
- 渑
- 恻
- 缱
- 拎
- 恤
- 缶
- 嵬
- 簋
- 囤
- 褴
- 蔼
- 沌
- 薏
- 鸵
- 跋
- 篪
- 罡
- 颇
- 嗄
- 胺
- 烯
- 酚
- 祠
- 迢
- 硖
- 眺
- 珏
- 怆
- 斧
- 痪
- 祺
- 嘤
- 谑
- 婊
- 滂
- 骇
- 帔
- 荼
- 硅
- 猖
- 皱
- 顽
- 榔
- 锌
- 蔻
- 滢
- 茸
- 捋
- 壥
- 孰
- 娩
- 锥
- 逾
- 诬
- 娠
- 厝
- 噎
- 秤
- 祢
- 嗳
- 嗜
- 滘
- 尅
- 悚
- 履
- 馕
- 簪
- 俭
- 摞
- 妗
- 蛎
- 暹
- 钾
- 膨
- 孚
- 驷
- 卯
- 猇
- 褚
- 町
- 骞
- -
- 芩
- 赁
- 粱
- 隼
- 掘
- 莽
- 郾
- 擒
- 叁
- 敕
- 镊
- 惘
- 蚤
- 邳
- 嗫
- 扪
- 瀛
- 凿
- 雎
- 啲
- 鲲
- 帼
- 枭
- 羹
- 驳
- 铆
- 肴
- 嫦
- 媲
- 鹳
- 秩
- 銮
- 饯
- 毽
- 珩
- 眩
- 仄
- 葳
- 撮
- 睇
- 塄
- 肘
- 钠
- 诓
- 呱
- 垅
- 菱
- 亍
- 戍
- 酯
- 袱
- 隘
- 蓟
- 暨
- 痣
- 辗
- 埵
- 殉
- 郏
- 孢
- 悳
- 讫
- 诲
- 髋
- 孑
- 睹
- 擅
- 嗮
- 慒
- 琰
- 濛
- 雌
- 恁
- 擀
- 娼
- 谕
- 撵
- 苯
- 聴
- 唛
- 撂
- 栖
- 拗
- 孬
- 怏
- 掇
- 肽
- 胰
- 沣
- 卅
- 箅
- 氨
- 浠
- 蠡
- 募
- 肛
- 岀
- 瞑
- 蛆
- 舀
- 蚝
- 歙
- 涔
- 诘
- 、
- 垡
- 涠
- 嘢
- 糸
- 胤
- 绊
- 柒
- 沓
- 粼
- 菖
- 犒
- 呒
- 唑
- 莘
- 莪
- 宸
- 睨
- \
- 鲶
- 蛐
- 溏
- 菈
- 蹩
- 焙
- 釆
- 瑗
- 睾
- 槐
- 榉
- 杷
- 鄢
- 僕
- 诽
- 嗲
- 蜃
- 戆
- 蘼
- 糜
- 霁
- 坻
- 硼
- 槛
- 枞
- 麸
- 谒
- 荀
- 邋
- 遢
- 锴
- 啶
- 粪
- 驭
- 筵
- 砌
- 莩
- 蹼
- 吔
- 缳
- 埭
- 隗
- 厶
- 丶
- "\x14"
- "\x17"
- 稼
- 铖
- 涣
- 亳
- 幢
- 沭
- 驮
- 奚
- 藐
- 颅
- 埤
- 愘
- 镲
- 窒
- 暄
- 诃
- 噘
- 歼
- 隅
- 爻
- 蘅
- 锹
- 锇
- 椎
- 琨
- 烩
- 枢
- 觧
- 萁
- 镂
- 龈
- 怠
- 阐
- 藉
- 凛
- 冽
- 珣
- 泘
- 抉
- 锭
- 蕃
- 蠃
- 毓
- 啐
- 栩
- 骷
- 髅
- 耷
- 寥
- 杵
- 蚬
- 窖
- 孛
- 舆
- 皿
- 柸
- 粳
- 钣
- 趸
- 叄
- 腚
- 杖
- 鸸
- 犲
- 浗
- 缮
- 哓
- 箧
- 攘
- 冇
- 钛
- 郗
- 囡
- 酆
- 姌
- 雉
- 胯
- 椭
- 埏
- 钵
- 绌
- 蝾
- 坼
- 濂
- w
- o
- r
- d
- 袒
- 峦
- 鹫
- 炯
- 悱
- 漕
- 莦
- 蔑
- 樽
- 牒
- 濡
- 嫯
- 陖
- 疸
- 桅
- 辖
- 僢
- 《
- 》
- 酣
- 遨
- 邬
- ':'
- 嫲
- 哌
- 锚
- 淙
- Q
- 濑
- 熨
- 谴
- 筛
- 薹
- 磬
- 熠
- 腓
- 阉
- 钴
- 恂
- 溉
- 陨
- 螳
- 孵
- 瘠
- 嫡
- 哝
- 狙
- 怼
- 斟
- 甫
- 渌
- 卒
- 翕
- 沏
- 旮
- 旯
- 菡
- 變
- 狈
- 鳜
- 嵋
- 仞
- 鳕
- 噩
- 踟
- 躇
- 蛀
- 瘸
- 篡
- 锊
- 団
- 斐
- 蹍
- 冗
- "\uFEFF"
- 歆
- 圴
- 泯
- 伥
- 愎
- 坌
- 碘
- 赉
- 骧
- 矩
- 綽
- 秭
- 怵
- 麝
- 贩
- 溥
- 捆
- 腩
- 溴
- 卉
- 痦
- 荻
- 缇
- 秸
- 秆
- 捍
- 炀
- 阆
- 泞
- 懊
- 啕
- 蚶
- 衩
- 桜
- 旖
- 贬
- 酵
- 滟
- 纥
- 倭
- 赝
- 呶
- 哧
- 煸
- 劢
- 炝
- 僚
- 豇
- 阂
- 涝
- 骡
- 霭
- 窨
- 殴
- 竣
- 醇
- 擂
- 怦
- 怩
- 臾
- 搔
- 伱
- 啉
- 嫖
- 囝
- 糠
- 胥
- 酰
- 镫
- 蟒
- 荞
- 醪
- 颦
- 吏
- 颛
- 赳
- 贿
- 赂
- 痩
- 仂
- 颍
- 罔
- 猕
- 嚒
- 蘸
- 熹
- 捺
- 坜
- 郜
- 鉄
- 蒌
- 荑
- 藻
- 谌
- 钳
- 屮
- 疵
- 哞
- 琮
- 潴
- 讹
- 镭
- '3'
- 尕
- 倬
- 庇
- 侩
- 瘆
- 傀
- 儡
- 诧
- 葆
- 唾
- 皋
- 逄
- 诌
- 氦
- 彳
- 盅
- 曳
- 槲
- 挟
- 怿
- 顷
- 臃
- 衙
- 踵
- 霈
- 嗪
- 闩
- 锟
- 恿
- 抻
- 茁
- 惢
- 菅
- 迂
- 瞟
- 痉
- 挛
- 绦
- 晁
- 挢
- 蠕
- 洙
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: '202207'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
jgiral95/q-Taxi-v3 | jgiral95 | 2022-09-21T23:03:18Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-09-21T23:03:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions from the training
# notebook (e.g. the Hugging Face Deep RL course utilities), not a pip package.
model = load_from_hub(repo_id="jgiral95/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average | teven | 2022-09-21T22:56:28Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T22:56:21Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average')
model = AutoModel.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
facebook/spar-wiki-bm25-lexmodel-context-encoder | facebook | 2022-09-21T22:46:34Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2110.06918",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T21:39:14Z | ---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the context encoder of the Wiki BM25 Lexical Model (Λ) from the SPAR paper:
[Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?](https://arxiv.org/abs/2110.06918)
<br>
Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta and Wen-tau Yih
<br>
**Meta AI**
The associated github repo is available here: https://github.com/facebookresearch/dpr-scale/tree/main/spar
This model is a BERT-base sized dense retriever trained on Wikipedia articles to imitate the behavior of BM25.
The following models are also available:
Pretrained Model | Corpus | Teacher | Architecture | Query Encoder Path | Context Encoder Path
|---|---|---|---|---|---
Wiki BM25 Λ | Wikipedia | BM25 | BERT-base | facebook/spar-wiki-bm25-lexmodel-query-encoder | facebook/spar-wiki-bm25-lexmodel-context-encoder
PAQ BM25 Λ | PAQ | BM25 | BERT-base | facebook/spar-paq-bm25-lexmodel-query-encoder | facebook/spar-paq-bm25-lexmodel-context-encoder
MARCO BM25 Λ | MS MARCO | BM25 | BERT-base | facebook/spar-marco-bm25-lexmodel-query-encoder | facebook/spar-marco-bm25-lexmodel-context-encoder
MARCO UniCOIL Λ | MS MARCO | UniCOIL | BERT-base | facebook/spar-marco-unicoil-lexmodel-query-encoder | facebook/spar-marco-unicoil-lexmodel-context-encoder
# Using the Lexical Model (Λ) Alone
This model should be used together with the associated query encoder, similar to the [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr) model.
```
import torch
from transformers import AutoTokenizer, AutoModel
# The tokenizer is the same for the query and context encoder
tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
query_input = tokenizer(query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
# Compute embeddings: take the last-layer hidden state of the [CLS] token
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]
# Compute similarity scores using dot product
score1 = query_emb @ ctx_emb[0] # 341.3268
score2 = query_emb @ ctx_emb[1] # 340.1626
```
# Using the Lexical Model (Λ) with a Base Dense Retriever as in SPAR
As Λ learns lexical matching from a sparse teacher retriever, it can be used in combination with a standard dense retriever (e.g. [DPR](https://huggingface.co/docs/transformers/v4.22.1/en/model_doc/dpr#dpr), [Contriever](https://huggingface.co/facebook/contriever-msmarco)) to build a dense retriever that excels at both lexical and semantic matching.
In the following example, we show how to build the SPAR-Wiki model for Open-Domain Question Answering by concatenating the embeddings of DPR and the Wiki BM25 Λ.
```
import torch
from transformers import AutoTokenizer, AutoModel
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
# DPR model
dpr_ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
dpr_query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
dpr_query_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
# Wiki BM25 Λ model
lexmodel_tokenizer = AutoTokenizer.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_query_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-query-encoder')
lexmodel_context_encoder = AutoModel.from_pretrained('facebook/spar-wiki-bm25-lexmodel-context-encoder')
query = "Where was Marie Curie born?"
contexts = [
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Compute DPR embeddings
dpr_query_input = dpr_query_tokenizer(query, return_tensors='pt')['input_ids']
dpr_query_emb = dpr_query_encoder(dpr_query_input).pooler_output
dpr_ctx_input = dpr_ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
dpr_ctx_emb = dpr_ctx_encoder(**dpr_ctx_input).pooler_output
# Compute Λ embeddings
lexmodel_query_input = lexmodel_tokenizer(query, return_tensors='pt')
lexmodel_query_emb = lexmodel_query_encoder(**lexmodel_query_input).last_hidden_state[:, 0, :]
lexmodel_ctx_input = lexmodel_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
lexmodel_ctx_emb = lexmodel_context_encoder(**lexmodel_ctx_input).last_hidden_state[:, 0, :]
# Form SPAR embeddings via concatenation
# The concatenation weight is only applied to query embeddings
# Refer to the SPAR paper for details
concat_weight = 0.7
spar_query_emb = torch.cat(
[dpr_query_emb, concat_weight * lexmodel_query_emb],
dim=-1,
)
spar_ctx_emb = torch.cat(
[dpr_ctx_emb, lexmodel_ctx_emb],
dim=-1,
)
# Compute similarity scores
score1 = spar_query_emb @ spar_ctx_emb[0] # 317.6931
score2 = spar_query_emb @ spar_ctx_emb[1] # 314.6144
```
|
teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average | teven | 2022-09-21T22:44:31Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T22:44:25Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average')
model = AutoModel.from_pretrained('teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all_bs320_vanilla_finetuned_WebNLG2020_metric_average)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
sd-concepts-library/karan-gloomy | sd-concepts-library | 2022-09-21T22:42:56Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T22:42:50Z | ---
license: mit
---
### Karan Gloomy on Stable Diffusion
This is the `<karan>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






















|
teven/bi_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average | teven | 2022-09-21T22:38:53Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T22:38:46Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/bi_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/bi_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/bi_all-mpnet-base-v2_finetuned_WebNLG2020_metric_average)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 27 with parameters:
```
{'batch_size': 96, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "better_cross_encoder.PearsonCorrelationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0005
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 135,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teven/bi_all_bs160_allneg_finetuned_WebNLG2020_metric_average | teven | 2022-09-21T22:37:45Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T22:37:38Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/bi_all_bs160_allneg_finetuned_WebNLG2020_metric_average
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/bi_all_bs160_allneg_finetuned_WebNLG2020_metric_average')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/bi_all_bs160_allneg_finetuned_WebNLG2020_metric_average)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 161 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "better_cross_encoder.PearsonCorrelationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 805,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
nvidia/nemo-megatron-gpt-20B | nvidia | 2022-09-21T22:32:20Z | 16 | 32 | nemo | [
"nemo",
"text generation",
"pytorch",
"causal-lm",
"en",
"dataset:the_pile",
"arxiv:1909.08053",
"arxiv:2101.00027",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-09-15T00:51:22Z | ---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
---
# NeMo Megatron-GPT 20B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
Megatron-GPT 20B is a transformer-based language model. GPT refers to a class of decoder-only transformer models similar to GPT-2 and GPT-3, while 20B refers to the total trainable parameter count (20 billion) [1, 2].
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
## Getting started
Note: You will need NVIDIA Ampere or Hopper GPUs to work with this model.
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.11.0
```
Alternatively, you can use the NeMo Megatron training Docker container with all dependencies pre-installed.
### Step 2: Launch eval server
**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 4 and Pipeline Parallelism (PP) of 1 on 4 GPUs.
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt20B_bf16_tp4.nemo server=True tensor_model_parallel_size=4 trainer.devices=4
```
### Step 3: Send prompts to your model!
```python
import json
import requests
port_num = 5555
headers = {"Content-Type": "application/json"}
def request_data(data):
resp = requests.put('http://localhost:{}/generate'.format(port_num),
data=json.dumps(data),
headers=headers)
sentences = resp.json()['sentences']
return sentences
data = {
"sentences": ["Tell me an interesting fact about space travel."]*1,
"tokens_to_generate": 50,
"temperature": 1.0,
"add_BOS": True,
"top_k": 0,
"top_p": 0.9,
"greedy": False,
"all_probs": False,
"repetition_penalty": 1.2,
"min_tokens_to_generate": 2,
}
sentences = request_data(data)
print(sentences)
```
## Training Data
The model was trained on ["The Piles" dataset prepared by Eleuther.AI](https://pile.eleuther.ai/). [4]
## Evaluation results
*Zero-shot performance.* Evaluated using [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation)
| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQA | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | --- | ------ | --------- | ---- |
| 0.4403 | 0.6141 | 0.5188 | 0.4277 | 0.659 | 0.5704 | 0.6954 | 0.721 | 0.7688 |
## Limitations
The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
## References
[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
tdobrxl/ClinicBERT | tdobrxl | 2022-09-21T22:27:34Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-07-27T16:18:35Z | ClinicBERT has the same architecture as the RoBERTa model. It has been trained on clinical text and can be used for feature extraction from textual data.
## How to use
### Feature Extraction
```
from transformers import RobertaModel, RobertaTokenizer
model = RobertaModel.from_pretrained("tdobrxl/ClinicBERT")
tokenizer = RobertaTokenizer.from_pretrained("tdobrxl/ClinicBERT")
text = "Randomized Study of Shark Cartilage in Patients With Breast Cancer."
encoded_input = tokenizer.encode(text, return_tensors="pt")
outputs = model(encoded_input)  # run the model once and reuse the outputs
last_hidden_state, pooler_output = outputs.last_hidden_state, outputs.pooler_output
```
### Masked Word Prediction
```
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="tdobrxl/ClinicBERT", tokenizer="tdobrxl/ClinicBERT")
text = "this is the start of a beautiful <mask>."
fill_mask(text)
```
```[{'score': 0.26558592915534973, 'token': 363, 'token_str': ' study', 'sequence': 'this is the start of a beautiful study.'}, {'score': 0.06330082565546036, 'token': 2010, 'token_str': ' procedure', 'sequence': 'this is the start of a beautiful procedure.'}, {'score': 0.04393036663532257, 'token': 661, 'token_str': ' trial', 'sequence': 'this is the start of a beautiful trial.'}, {'score': 0.0363750196993351, 'token': 839, 'token_str': ' period', 'sequence': 'this is the start of a beautiful period.'}, {'score': 0.027248281985521317, 'token': 436, 'token_str': ' treatment', 'sequence': 'this is the start of a beautiful treatment.'}]``` |
monakth/distillbert-base-uncased-fine-tuned-squad | monakth | 2022-09-21T22:01:02Z | 123 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-09-18T15:48:06Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1568
## Model description
More information needed
## Intended uses & limitations
More information needed
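In the absence of documented usage, here is a minimal question-answering sketch; the repo id is assumed from this card's name, and the question and context strings are only illustrative.
```python
from transformers import pipeline

# Repo id assumed from this model card's name
qa = pipeline(
    "question-answering",
    model="monakth/distillbert-base-uncased-fine-tuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```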
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2269 | 1.0 | 5533 | 1.1705 |
| 0.9725 | 2.0 | 11066 | 1.1238 |
| 0.768 | 3.0 | 16599 | 1.1568 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
omarques/autotrain-dogs-and-cats-1527055142 | omarques | 2022-09-21T21:38:24Z | 267 | 2 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:omarques/autotrain-data-dogs-and-cats",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-09-21T21:37:41Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- omarques/autotrain-data-dogs-and-cats
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.8187420113922029
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1527055142
- CO2 Emissions (in grams): 0.8187
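A minimal inference sketch is shown below; the repo id is taken from this card, and the image path is a placeholder.
```python
from transformers import pipeline

# Repo id taken from this model card; the image path below is a placeholder
classifier = pipeline(
    "image-classification",
    model="omarques/autotrain-dogs-and-cats-1527055142",
)

predictions = classifier("my_pet.jpg")  # local file path, URL, or PIL image
print(predictions)  # e.g. [{'label': '...', 'score': ...}, ...]
```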
## Validation Metrics
- Loss: 0.068
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
sd-concepts-library/midjourney-style | sd-concepts-library | 2022-09-21T21:17:45Z | 0 | 152 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T21:17:31Z | ---
license: mit
---
### Midjourney style on Stable Diffusion
This is the `<midjourney-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
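Beyond the notebooks above, a minimal local sketch is shown below. It assumes a recent `diffusers` release that provides `load_textual_inversion` and a Stable Diffusion base checkpoint you are licensed to download; the prompt is illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any Stable Diffusion v1.x pipeline should behave similarly
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned <midjourney-style> token from this concept repository
pipe.load_textual_inversion("sd-concepts-library/midjourney-style")

image = pipe("a mountain fortress at dawn in the style of <midjourney-style>").images[0]
image.save("fortress.png")
```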
Here is the new concept you will be able to use as a `style`:




|
research-backup/roberta-large-semeval2012-average-prompt-e-nce-classification | research-backup | 2022-09-21T20:57:42Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T20:26:28Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-e-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.75625
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5213903743315508
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5222551928783383
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6292384658143413
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.768
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4649122807017544
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5277777777777778
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9121591080307367
- name: F1 (macro)
type: f1_macro
value: 0.9078493464517976
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8328638497652581
- name: F1 (macro)
type: f1_macro
value: 0.643974348342842
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.652762730227519
- name: F1 (macro)
type: f1_macro
value: 0.6418800744019266
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9641093413090353
- name: F1 (macro)
type: f1_macro
value: 0.889375508685358
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8827953619554998
- name: F1 (macro)
type: f1_macro
value: 0.8807348541974301
---
# relbert/roberta-large-semeval2012-average-prompt-e-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5213903743315508
- Accuracy on SAT: 0.5222551928783383
- Accuracy on BATS: 0.6292384658143413
- Accuracy on U2: 0.4649122807017544
- Accuracy on U4: 0.5277777777777778
- Accuracy on Google: 0.768
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9121591080307367
- Micro F1 score on CogALexV: 0.8328638497652581
- Micro F1 score on EVALution: 0.652762730227519
- Micro F1 score on K&H+N: 0.9641093413090353
- Micro F1 score on ROOT09: 0.8827953619554998
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.75625
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
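Relation embeddings obtained this way can be compared directly; the sketch below relies only on `get_embedding` as shown above plus NumPy, the word pairs are illustrative, and the returned vector is assumed to be array-like of dimension 1024.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-nce-classification")

# Two capital-of pairs and one unrelated pair (illustrative examples)
v_tokyo = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
v_paris = np.asarray(model.get_embedding(['Paris', 'France']))
v_other = np.asarray(model.get_embedding(['keyboard', 'banana']))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(v_tokyo, v_paris))  # same relation: expected to be comparatively high
print(cosine(v_tokyo, v_other))  # different relation: expected to be lower
```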
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
blmnk/distilbert-base-uncased-finetuned-emotion | blmnk | 2022-09-21T20:46:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-09-21T20:19:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.896
- name: F1
type: f1
value: 0.8927988574486181
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Accuracy: 0.896
- F1: 0.8928
## Model description
More information needed
## Intended uses & limitations
More information needed
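In the absence of documented usage, here is a minimal inference sketch; the repo id is assumed from this card's name, and the example sentence and output are illustrative.
```python
from transformers import pipeline

# Repo id assumed from this model card's name
classifier = pipeline(
    "text-classification",
    model="blmnk/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.9...}] -- label names depend on the exported
# config and may appear as LABEL_0 ... LABEL_5 for the six emotion classes
```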
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.6029 | 0.7985 | 0.7597 |
| 0.7905 | 2.0 | 250 | 0.3821 | 0.896 | 0.8928 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/outfit-items | sd-concepts-library | 2022-09-21T19:52:18Z | 0 | 2 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T19:52:12Z | ---
license: mit
---
### Outfit Items on Stable Diffusion
This is the `<outfit-items>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
pritamdeka/S-BioBert-snli-multinli-stsb | pritamdeka | 2022-09-21T18:59:33Z | 2,681 | 5 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# S-BioBert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-BioBert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
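Since this is a sentence-similarity model, the embeddings are typically compared with cosine similarity. A short sketch follows, assuming a sentence-transformers version that exposes `util.cos_sim` (older releases name it `util.pytorch_cos_sim`) and using illustrative sentences.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pritamdeka/S-BioBert-snli-multinli-stsb')

query = "Shark cartilage was evaluated in patients with breast cancer."
candidates = [
    "A randomized study tested shark cartilage in breast cancer patients.",
    "The weather in Belfast is often rainy.",
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(query_emb, cand_emb)  # shape (1, 2); higher = more similar
print(scores)
```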
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-BioBert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-BioBert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model, kindly cite the following work
```
@inproceedings{deka2021unsupervised,
title={Unsupervised Keyword Combination Query Generation from Online Health Related Content for Evidence-Based Fact Checking},
author={Deka, Pritam and Jurek-Loughrey, Anna},
booktitle={The 23rd International Conference on Information Integration and Web Intelligence},
pages={267--277},
year={2021}
}
``` |
pritamdeka/S-Scibert-snli-multinli-stsb | pritamdeka | 2022-09-21T18:59:09Z | 5,987 | 4 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/S-Scibert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-Scibert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-Scibert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-Scibert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model, kindly cite the following work
```
@inproceedings{deka2021unsupervised,
title={Unsupervised Keyword Combination Query Generation from Online Health Related Content for Evidence-Based Fact Checking},
author={Deka, Pritam and Jurek-Loughrey, Anna},
booktitle={The 23rd International Conference on Information Integration and Web Intelligence},
pages={267--277},
year={2021}
}
``` |
pritamdeka/S-Bluebert-snli-multinli-stsb | pritamdeka | 2022-09-21T18:58:03Z | 702 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# pritamdeka/S-Bluebert-snli-multinli-stsb
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('pritamdeka/S-Bluebert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
If you use the model, kindly cite the following work
```
@inproceedings{deka2021unsupervised,
title={Unsupervised Keyword Combination Query Generation from Online Health Related Content for Evidence-Based Fact Checking},
author={Deka, Pritam and Jurek-Loughrey, Anna},
booktitle={The 23rd International Conference on Information Integration and Web Intelligence},
pages={267--277},
year={2021}
}
``` |
sd-concepts-library/wildkat | sd-concepts-library | 2022-09-21T18:56:20Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T18:56:13Z | ---
license: mit
---
### Wildkat on Stable Diffusion
This is the `<wildkat>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:









|
research-backup/roberta-large-semeval2012-average-prompt-a-nce-classification | research-backup | 2022-09-21T18:41:55Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T18:03:50Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-a-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.789047619047619
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3342245989304813
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33827893175074186
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3885491939966648
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.542
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3201754385964912
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33564814814814814
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8865451258098539
- name: F1 (macro)
type: f1_macro
value: 0.8770785182418419
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8401408450704225
- name: F1 (macro)
type: f1_macro
value: 0.6242491296371133
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6749729144095341
- name: F1 (macro)
type: f1_macro
value: 0.6505812342477592
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9607706753842944
- name: F1 (macro)
type: f1_macro
value: 0.8781957733610742
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8994045753682232
- name: F1 (macro)
type: f1_macro
value: 0.8968786782259857
---
# relbert/roberta-large-semeval2012-average-prompt-a-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3342245989304813
- Accuracy on SAT: 0.33827893175074186
- Accuracy on BATS: 0.3885491939966648
- Accuracy on U2: 0.3201754385964912
- Accuracy on U4: 0.33564814814814814
- Accuracy on Google: 0.542
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8865451258098539
- Micro F1 score on CogALexV: 0.8401408450704225
- Micro F1 score on EVALution: 0.6749729144095341
- Micro F1 score on K&H+N: 0.9607706753842944
- Micro F1 score on ROOT09: 0.8994045753682232
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.789047619047619
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-a-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
osanseviero/da_core_news_sm | osanseviero | 2022-09-21T17:43:59Z | 1 | 0 | spacy | [
"spacy",
"token-classification",
"da",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- da
license: cc-by-sa-4.0
model-index:
- name: da_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7570498915
- name: NER Recall
type: recall
value: 0.7270833333
- name: NER F Score
type: f_score
value: 0.7417640808
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9498765073
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9498765073
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9343341404
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9449878935
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7988826816
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.752849162
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.884097035
---
### Details: https://spacy.io/models/da#da_core_news_sm
Danish pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner, attribute_ruler.
| Feature | Description |
| --- | --- |
| **Name** | `da_core_news_sm` |
| **Version** | `3.4.0` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Danish DDT v2.8](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://github.com/alexandrainst/danlp/blob/master/docs/datasets.md#danish-dependency-treebank-dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
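A minimal usage sketch, assuming the packaged pipeline has been installed locally (for example via `pip install` of the released wheel, or `python -m spacy download da_core_news_sm` for the upstream distribution); the example sentence is illustrative.
```python
import spacy

# Load the installed Danish pipeline
nlp = spacy.load("da_core_news_sm")

doc = nlp("Jens Hansen bor i København og arbejder for Danmarks Radio.")

# Named entities (PER, LOC, ORG, MISC)
for ent in doc.ents:
    print(ent.text, ent.label_)

# Part-of-speech tags and dependency labels for the first tokens
for token in doc[:5]:
    print(token.text, token.pos_, token.dep_)
```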
### Label Scheme
<details>
<summary>View label scheme (194 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=ADP\|PartType=Inf`, `Degree=Pos\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=PART\|PartType=Inf`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Imp\|POS=VERB`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=ADV\|PartType=Inf`, `Degree=Sup\|POS=ADV`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|POS=PROPN`, `POS=ADP`, 
`Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `POS=SPACE`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=INTJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=SYM`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Degree=Sup\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind\|Style=Arch`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Foreign=Yes\|POS=X`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Degree=Abs\|POS=ADV`, `POS=VERB\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PRON\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=AUX`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=NOUN`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=NOUN` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.95 |
| `TOKEN_P` | 99.78 |
| `TOKEN_R` | 99.75 |
| `TOKEN_F` | 99.76 |
| `POS_ACC` | 94.99 |
| `MORPH_ACC` | 93.43 |
| `MORPH_MICRO_P` | 95.72 |
| `MORPH_MICRO_R` | 94.69 |
| `MORPH_MICRO_F` | 95.20 |
| `SENTS_P` | 89.62 |
| `SENTS_R` | 87.23 |
| `SENTS_F` | 88.41 |
| `DEP_UAS` | 79.89 |
| `DEP_LAS` | 75.28 |
| `LEMMA_ACC` | 94.50 |
| `TAG_ACC` | 94.99 |
| `ENTS_P` | 75.70 |
| `ENTS_R` | 72.71 |
| `ENTS_F` | 74.18 | |
sd-concepts-library/dicoo2 | sd-concepts-library | 2022-09-21T17:35:48Z | 0 | 1 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T17:35:43Z | ---
license: mit
---
### Dicoo2 on Stable Diffusion
This is the `<dicoo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
research-backup/roberta-large-semeval2012-mask-prompt-d-nce-classification | research-backup | 2022-09-21T17:31:01Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T16:59:47Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.796765873015873
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6524064171122995
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6498516320474778
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7509727626459144
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.902
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6271929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.625
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9246647581738737
- name: F1 (macro)
type: f1_macro
value: 0.9201116139693363
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8826291079812206
- name: F1 (macro)
type: f1_macro
value: 0.74506786895136
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7172264355362946
- name: F1 (macro)
type: f1_macro
value: 0.703292242462215
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9616748974055783
- name: F1 (macro)
type: f1_macro
value: 0.8934154139843127
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9094327796928863
- name: F1 (macro)
type: f1_macro
value: 0.906471425124189
---
# relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.6524064171122995
- Accuracy on SAT: 0.6498516320474778
- Accuracy on BATS: 0.7509727626459144
- Accuracy on U2: 0.6271929824561403
- Accuracy on U4: 0.625
- Accuracy on Google: 0.902
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9246647581738737
- Micro F1 score on CogALexV: 0.8826291079812206
- Micro F1 score on EVALution: 0.7172264355362946
- Micro F1 score on K&H+N: 0.9616748974055783
- Micro F1 score on ROOT09: 0.9094327796928863
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.796765873015873
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
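As a quick illustrative check (not part of the RelBERT evaluation code), the pair embeddings can be compared with cosine similarity; pairs expressing the same relation should score higher than unrelated pairs.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification")

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two capital-of pairs versus an unrelated pair.
v_tokyo_japan = model.get_embedding(['Tokyo', 'Japan'])
v_paris_france = model.get_embedding(['Paris', 'France'])
v_unrelated = model.get_embedding(['Paris', 'sushi'])

print(cosine(v_tokyo_japan, v_paris_france))  # expected to be comparatively high
print(cosine(v_tokyo_japan, v_unrelated))     # expected to be lower
```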
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
Harindu/blurr_IMDB_distilbert_classification | Harindu | 2022-09-21T17:17:00Z | 0 | 0 | fastai | [
"fastai",
"region:us"
]
| null | 2022-09-21T17:16:48Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Sindhana/hotdog-not-hotdog | Sindhana | 2022-09-21T17:02:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-09-21T03:17:01Z | ---
title: hotdog not hotdog
emoji: 🦀
colorFrom: purple
colorTo: purple
sdk: gradio
sdk_version: 3.1.7
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
|
research-backup/roberta-large-semeval2012-mask-prompt-c-nce-classification | research-backup | 2022-09-21T16:59:42Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T16:17:41Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.5331547619047619
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.2914438502673797
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29080118694362017
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3913285158421345
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.486
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3263888888888889
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8392345939430466
- name: F1 (macro)
type: f1_macro
value: 0.8259066607574465
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7570422535211268
- name: F1 (macro)
type: f1_macro
value: 0.43666662077729007
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5926327193932828
- name: F1 (macro)
type: f1_macro
value: 0.5763337381530251
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9392780134937748
- name: F1 (macro)
type: f1_macro
value: 0.8298559683420568
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8934503290504543
- name: F1 (macro)
type: f1_macro
value: 0.8858359126040442
---
# relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.2914438502673797
- Accuracy on SAT: 0.29080118694362017
- Accuracy on BATS: 0.3913285158421345
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.3263888888888889
- Accuracy on Google: 0.486
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8392345939430466
- Micro F1 score on CogALexV: 0.7570422535211268
- Micro F1 score on EVALution: 0.5926327193932828
- Micro F1 score on K&H+N: 0.9392780134937748
- Micro F1 score on ROOT09: 0.8934503290504543
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.5331547619047619
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
SzegedAI/charmen-electra | SzegedAI | 2022-09-21T16:42:21Z | 106 | 1 | transformers | [
"transformers",
"pytorch",
"feature-extraction",
"byte representation",
"gradient boosting",
"hungarian",
"custom_code",
"hu",
"dataset:common_crawl",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
]
| feature-extraction | 2022-08-27T10:17:26Z | ---
language: hu
license: apache-2.0
datasets:
- common_crawl
- wikipedia
tags:
- byte representation
- gradient boosting
- hungarian
---
# Charmen-Electra
A byte-based transformer model trained on the Hungarian language. To use the model you will need the custom tokenizer available at [https://github.com/szegedai/byte-offset-tokenizer](https://github.com/szegedai/byte-offset-tokenizer).
Since the model uses a custom architecture with gradient boosting and down- and up-sampling, you have to enable trusted remote code when loading it:
```python
model = AutoModel.from_pretrained("SzegedAI/charmen-electra", trust_remote_code=True)
```
# Acknowledgement
[](https://mi.nemzetilabor.hu/) |
sd-concepts-library/sherhook-painting | sd-concepts-library | 2022-09-21T16:41:10Z | 0 | 4 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T16:41:04Z | ---
license: mit
---
### Sherhook Painting on Stable Diffusion
This is the `<sherhook>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
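As an alternative to the notebooks, recent `diffusers` releases can pull the concept repository directly — a hedged sketch, since the `load_textual_inversion` helper is not mentioned in this card and requires a sufficiently new `diffusers` version:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Recent diffusers versions can load sd-concepts-library repos directly;
# older versions need the manual embedding-registration pattern instead.
pipe.load_textual_inversion("sd-concepts-library/sherhook-painting")

image = pipe("a misty harbour at dawn in the style of <sherhook>").images[0]
image.save("sherhook_harbour.png")
```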
Here is the new concept you will be able to use as a `style`:







|
research-backup/roberta-large-semeval2012-mask-prompt-b-nce-classification | research-backup | 2022-09-21T16:17:35Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-09-21T15:45:17Z | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7908730158730158
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5080213903743316
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5192878338278932
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6653696498054474
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.84
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.45614035087719296
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5393518518518519
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9132138014163026
- name: F1 (macro)
type: f1_macro
value: 0.9101733559621606
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8502347417840377
- name: F1 (macro)
type: f1_macro
value: 0.6852576593859314
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6852654387865655
- name: F1 (macro)
type: f1_macro
value: 0.6694360423727916
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9604228976838005
- name: F1 (macro)
type: f1_macro
value: 0.8826948107609662
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9022250078345346
- name: F1 (macro)
type: f1_macro
value: 0.9002463330589072
---
# relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5080213903743316
- Accuracy on SAT: 0.5192878338278932
- Accuracy on BATS: 0.6653696498054474
- Accuracy on U2: 0.45614035087719296
- Accuracy on U4: 0.5393518518518519
- Accuracy on Google: 0.84
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9132138014163026
- Micro F1 score on CogALexV: 0.8502347417840377
- Micro F1 score on EVALution: 0.6852654387865655
- Micro F1 score on K&H+N: 0.9604228976838005
- Micro F1 score on ROOT09: 0.9022250078345346
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7908730158730158
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 27
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
sd-concepts-library/detectivedinosaur1 | sd-concepts-library | 2022-09-21T16:06:29Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2022-09-21T16:06:18Z | ---
license: mit
---
### detectivedinosaur1 on Stable Diffusion
This is the `<dd1>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage | teven | 2022-09-21T15:53:15Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T15:53:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage')
model = AutoModel.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_data_coverage)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage | teven | 2022-09-21T15:52:36Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T15:52:29Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage')
model = AutoModel.from_pretrained('teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage | teven | 2022-09-21T15:52:01Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T15:51:53Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage')
model = AutoModel.from_pretrained('teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all_bs160_allneg_finetuned_WebNLG2020_data_coverage)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
teven/bi_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage | teven | 2022-09-21T15:50:15Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2022-09-21T15:50:08Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# teven/bi_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/bi_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/bi_all_bs192_hardneg_finetuned_WebNLG2020_data_coverage)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 161 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 0,
"evaluator": "better_cross_encoder.PearsonCorrelationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 805,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
tianchez/autotrain-line_clip_no_nut_boltline_clip_no_nut_bolt-1523955096 | tianchez | 2022-09-21T15:49:25Z | 196 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:tianchez/autotrain-data-line_clip_no_nut_boltline_clip_no_nut_bolt",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
]
| image-classification | 2022-09-21T15:42:51Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- tianchez/autotrain-data-line_clip_no_nut_boltline_clip_no_nut_bolt
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 10.423410288264847
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1523955096
- CO2 Emissions (in grams): 10.4234
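For inference, the model should work with the standard `transformers` image-classification pipeline; the sketch below is illustrative and uses the widget sample image rather than data from the training set.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="tianchez/autotrain-line_clip_no_nut_boltline_clip_no_nut_bolt-1523955096",
)

# Accepts a local file path, a PIL.Image, or a URL; this is the widget sample image.
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts, sorted by score
```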
## Validation Metrics
- Loss: 0.580
- Accuracy: 0.798
- Macro F1: 0.542
- Micro F1: 0.798
- Weighted F1: 0.796
- Macro Precision: 0.548
- Micro Precision: 0.798
- Weighted Precision: 0.796
- Macro Recall: 0.537
- Micro Recall: 0.798
- Weighted Recall: 0.798 |