modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-03 06:27:37) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 549 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-03 06:23:41) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
Den4ikAI/DLM_CHITCHAT_700M
|
Den4ikAI
| 2023-05-18T15:22:10Z | 142 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-02T16:36:28Z |
---
license: mit
widget:
- text: "- У Артура было 17 пончиков, а потом он 3 съел. Сколько у него осталось пончиков? -"
- text: "- Привет! -"
- text: "- В чем смысл жизни? -"
- text: "- Стеклянный шар упал на бетонный стол. Что разбилось? -"
language:
- ru
---
A generative chit-chat model based on the DLM-700M language model.
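A minimal inference sketch (not from the original card), assuming the standard `transformers` causal-LM API and the dialogue format of the widget examples, where turns are prefixed with "-":
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Den4ikAI/DLM_CHITCHAT_700M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Dialogue turns are prefixed with "-", following the widget examples above.
prompt = "- Привет! -"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```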
|
pphildan/vit-base-patch16-224-v17
|
pphildan
| 2023-05-18T15:20:30Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-18T14:35:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-v17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-v17
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0392
- Accuracy: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2655 | 1.0 | 190 | 0.1454 | 0.9533 |
| 0.1577 | 2.0 | 380 | 0.0953 | 0.9659 |
| 0.0957 | 3.0 | 570 | 0.0392 | 0.9870 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
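As a hedged usage sketch (not part of the auto-generated card), the checkpoint can be loaded with the `transformers` image-classification pipeline; the label set is whatever the undocumented training dataset defined:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="pphildan/vit-base-patch16-224-v17")
# Accepts a local path, a URL, or a PIL.Image; the path below is a placeholder.
print(classifier("path/to/image.jpg"))
```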
|
AnanthZeke/tabert-2k-naamapadam
|
AnanthZeke
| 2023-05-18T15:11:23Z | 126 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-18T13:32:31Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tabert-2k-naamapadam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tabert-2k-naamapadam
This model is a fine-tuned version of [livinNector/tabert-2k](https://huggingface.co/livinNector/tabert-2k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2850
- Precision: 0.7765
- Recall: 0.8041
- F1: 0.7901
- Accuracy: 0.9065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4679 | 0.05 | 400 | 0.3991 | 0.7155 | 0.6561 | 0.6845 | 0.8720 |
| 0.3907 | 0.1 | 800 | 0.3632 | 0.7181 | 0.7233 | 0.7207 | 0.8822 |
| 0.3663 | 0.15 | 1200 | 0.3483 | 0.7271 | 0.7371 | 0.7321 | 0.8857 |
| 0.3557 | 0.21 | 1600 | 0.3457 | 0.7286 | 0.7506 | 0.7395 | 0.8874 |
| 0.3533 | 0.26 | 2000 | 0.3413 | 0.7371 | 0.7435 | 0.7403 | 0.8895 |
| 0.3396 | 0.31 | 2400 | 0.3326 | 0.7435 | 0.7546 | 0.7490 | 0.8910 |
| 0.3302 | 0.36 | 2800 | 0.3264 | 0.7528 | 0.7553 | 0.7540 | 0.8937 |
| 0.3344 | 0.41 | 3200 | 0.3231 | 0.7503 | 0.7720 | 0.7610 | 0.8951 |
| 0.3262 | 0.46 | 3600 | 0.3228 | 0.7387 | 0.7762 | 0.7570 | 0.8941 |
| 0.3186 | 0.51 | 4000 | 0.3158 | 0.7699 | 0.7666 | 0.7683 | 0.8986 |
| 0.3163 | 0.57 | 4400 | 0.3130 | 0.7453 | 0.7798 | 0.7622 | 0.8955 |
| 0.3143 | 0.62 | 4800 | 0.3150 | 0.7572 | 0.7751 | 0.7660 | 0.8961 |
| 0.3088 | 0.67 | 5200 | 0.3151 | 0.7543 | 0.7828 | 0.7683 | 0.8972 |
| 0.3115 | 0.72 | 5600 | 0.3141 | 0.7708 | 0.7706 | 0.7707 | 0.8977 |
| 0.3095 | 0.77 | 6000 | 0.3043 | 0.7657 | 0.7831 | 0.7743 | 0.8991 |
| 0.3044 | 0.82 | 6400 | 0.3087 | 0.7526 | 0.7881 | 0.7699 | 0.8972 |
| 0.2964 | 0.87 | 6800 | 0.3070 | 0.7644 | 0.7928 | 0.7783 | 0.8992 |
| 0.2972 | 0.93 | 7200 | 0.3102 | 0.7692 | 0.7738 | 0.7715 | 0.8999 |
| 0.2985 | 0.98 | 7600 | 0.3016 | 0.7731 | 0.7858 | 0.7794 | 0.9018 |
| 0.2822 | 1.03 | 8000 | 0.3049 | 0.7734 | 0.7909 | 0.7820 | 0.9031 |
| 0.2764 | 1.08 | 8400 | 0.3059 | 0.7575 | 0.7976 | 0.7770 | 0.9011 |
| 0.2752 | 1.13 | 8800 | 0.3052 | 0.7553 | 0.7996 | 0.7768 | 0.9015 |
| 0.2689 | 1.18 | 9200 | 0.2990 | 0.7642 | 0.7982 | 0.7808 | 0.9037 |
| 0.2738 | 1.23 | 9600 | 0.2985 | 0.7698 | 0.7987 | 0.7840 | 0.9035 |
| 0.2731 | 1.29 | 10000 | 0.2950 | 0.7713 | 0.7982 | 0.7845 | 0.9037 |
| 0.2694 | 1.34 | 10400 | 0.2920 | 0.7743 | 0.8017 | 0.7878 | 0.9059 |
| 0.2727 | 1.39 | 10800 | 0.2931 | 0.7693 | 0.7979 | 0.7834 | 0.9040 |
| 0.2622 | 1.44 | 11200 | 0.2946 | 0.7702 | 0.7942 | 0.7820 | 0.9032 |
| 0.2672 | 1.49 | 11600 | 0.2894 | 0.7724 | 0.8062 | 0.7890 | 0.9060 |
| 0.2601 | 1.54 | 12000 | 0.2907 | 0.7706 | 0.8010 | 0.7855 | 0.9058 |
| 0.2629 | 1.59 | 12400 | 0.2930 | 0.7628 | 0.8150 | 0.7880 | 0.9052 |
| 0.2635 | 1.65 | 12800 | 0.2907 | 0.7775 | 0.7970 | 0.7871 | 0.9047 |
| 0.2673 | 1.7 | 13200 | 0.2909 | 0.7753 | 0.7982 | 0.7866 | 0.9045 |
| 0.2726 | 1.75 | 13600 | 0.2880 | 0.7714 | 0.8048 | 0.7877 | 0.9054 |
| 0.2607 | 1.8 | 14000 | 0.2850 | 0.7760 | 0.8010 | 0.7883 | 0.9053 |
| 0.2684 | 1.85 | 14400 | 0.2847 | 0.7709 | 0.8077 | 0.7889 | 0.9059 |
| 0.2625 | 1.9 | 14800 | 0.2849 | 0.7742 | 0.8079 | 0.7907 | 0.9067 |
| 0.2631 | 1.95 | 15200 | 0.2850 | 0.7765 | 0.8041 | 0.7901 | 0.9065 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
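A hedged inference sketch (not part of the auto-generated card): the checkpoint can be served with the token-classification pipeline. Naamapadam is an Indic NER corpus and the base model name suggests Tamil input, so the sentence below is only a placeholder:
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece predictions into entity spans.
ner = pipeline("token-classification", model="AnanthZeke/tabert-2k-naamapadam", aggregation_strategy="simple")
print(ner("<Tamil sentence here>"))  # replace with real Tamil text
```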
|
docmparker/all-mpnet-base-v2-setfit-8label-edu
|
docmparker
| 2023-05-18T15:00:34Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-18T14:32:28Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# docmparker/all-mpnet-base-v2-setfit-8label-edu
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("docmparker/all-mpnet-base-v2-setfit-8label-edu")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
alvations/mt5-aym-lex
|
alvations
| 2023-05-18T14:59:24Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-10T04:38:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-aym-lex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-aym-lex
This model is a fine-tuned version of [alvations/mt5-aym-lex](https://huggingface.co/alvations/mt5-aym-lex) on an unknown dataset.
It achieves the following results on the evaluation set:
- Bleu: 3.1238
- Chrf: 24.4605
- Gen Len: 17.3872
- Loss: 0.1883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Bleu | Chrf | Gen Len | Validation Loss |
|:-------------:|:-----:|:-----:|:------:|:-------:|:-------:|:---------------:|
| 0.067 | 4.86 | 20000 | 2.9344 | 24.2586 | 17.5005 | 0.1844 |
| 0.065 | 9.71 | 40000 | 3.1238 | 24.4605 | 17.3872 | 0.1883 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
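A hedged generation sketch (not part of the auto-generated card): the model name suggests an Aymara translation/lexical task, but the expected source language and input format are not documented here, so the input below is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "alvations/mt5-aym-lex"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("<source sentence>", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```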
|
HasinMDG/all-distilroberta-v1-IPTC-L1
|
HasinMDG
| 2023-05-18T14:54:50Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-18T12:52:01Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/all-distilroberta-v1-IPTC-L1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/all-distilroberta-v1-IPTC-L1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Acreedlmt/Gigi
|
Acreedlmt
| 2023-05-18T14:51:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-15T15:49:24Z |
---
license: creativeml-openrail-m
---
|
TootToot/FirstTaxi
|
TootToot
| 2023-05-18T14:40:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T14:40:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: FirstTaxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="TootToot/FirstTaxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
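`load_from_hub` and `gym` in the snippet above are assumed to come from the Deep RL course notebook. A minimal stand-in for the helper, assuming the pickle holds a dict with keys such as `env_id` and `qtable`, could look like:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```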
|
MrPark97/distillbert-base-uncased-finetuned-clinc
|
MrPark97
| 2023-05-18T14:37:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T09:15:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distillbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
GraydientPlatformAPI/model_683
|
GraydientPlatformAPI
| 2023-05-18T14:17:48Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-18T14:07:52Z |
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
Smoden/A_MIX_W_diff_lora
|
Smoden
| 2023-05-18T14:11:44Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-18T11:50:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/A_MIX_W_diff_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on an unknown dataset. You can find some example images below.
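A possible way to apply these weights (a sketch, assuming a diffusers release with built-in LoRA loading; the prompt and parameters are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_lora_weights("Smoden/A_MIX_W_diff_lora")  # requires a recent diffusers version
pipe.to("cuda")
image = pipe("a prompt in the style the adapter was trained on", num_inference_steps=30).images[0]
image.save("sample.png")
```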
|
vorstcavry/webui
|
vorstcavry
| 2023-05-18T14:11:21Z | 4 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-23T10:47:16Z |
---
license: creativeml-openrail-m
---
|
damapika/roberta-base_mod
|
damapika
| 2023-05-18T14:10:23Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:quoref",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-22T09:40:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- quoref
model-index:
- name: roberta-base_mod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mod
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the quoref dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6272 | 1.0 | 1213 | 1.4654 |
| 1.0583 | 2.0 | 2426 | 1.4134 |
| 0.6854 | 3.0 | 3639 | 1.5400 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
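A hedged usage sketch (not part of the auto-generated card): since the model was fine-tuned on Quoref, an extractive QA dataset, it can be queried with the question-answering pipeline; the question and context below are illustrative only:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="damapika/roberta-base_mod")
result = qa(
    question="Who reviewed the report?",
    context="The report was written by the committee and reviewed by an external auditor.",
)
print(result["answer"], result["score"])
```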
|
mousaazari/t5-text2sql_v3
|
mousaazari
| 2023-05-18T14:01:24Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-11T13:27:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-text2sql_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text2sql_v3
This model is a fine-tuned version of [mousaazari/t5-text2sql_v1](https://huggingface.co/mousaazari/t5-text2sql_v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1501
- Rouge2 Precision: 0.6088
- Rouge2 Recall: 0.3597
- Rouge2 Fmeasure: 0.4201
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| No log | 1.0 | 430 | 0.3126 | 0.3937 | 0.2301 | 0.2679 |
| 0.4851 | 2.0 | 860 | 0.2583 | 0.4656 | 0.2854 | 0.3289 |
| 0.3271 | 3.0 | 1290 | 0.2256 | 0.4858 | 0.2875 | 0.3337 |
| 0.2696 | 4.0 | 1720 | 0.2075 | 0.5193 | 0.3127 | 0.3614 |
| 0.2376 | 5.0 | 2150 | 0.1937 | 0.5387 | 0.3258 | 0.3773 |
| 0.2072 | 6.0 | 2580 | 0.1839 | 0.5524 | 0.3344 | 0.3876 |
| 0.1875 | 7.0 | 3010 | 0.1752 | 0.5644 | 0.3333 | 0.3882 |
| 0.1875 | 8.0 | 3440 | 0.1704 | 0.5751 | 0.3426 | 0.399 |
| 0.1736 | 9.0 | 3870 | 0.1653 | 0.5821 | 0.3458 | 0.4027 |
| 0.1585 | 10.0 | 4300 | 0.1603 | 0.5841 | 0.3435 | 0.4013 |
| 0.1498 | 11.0 | 4730 | 0.1576 | 0.5905 | 0.3535 | 0.4103 |
| 0.1427 | 12.0 | 5160 | 0.1548 | 0.6031 | 0.3533 | 0.4135 |
| 0.1342 | 13.0 | 5590 | 0.1541 | 0.5976 | 0.3519 | 0.411 |
| 0.1294 | 14.0 | 6020 | 0.1534 | 0.6058 | 0.3549 | 0.4161 |
| 0.1294 | 15.0 | 6450 | 0.1518 | 0.6117 | 0.3593 | 0.4203 |
| 0.1239 | 16.0 | 6880 | 0.1509 | 0.61 | 0.3597 | 0.4202 |
| 0.1198 | 17.0 | 7310 | 0.1508 | 0.6076 | 0.3588 | 0.4195 |
| 0.1147 | 18.0 | 7740 | 0.1503 | 0.6139 | 0.3607 | 0.4219 |
| 0.1155 | 19.0 | 8170 | 0.1503 | 0.6092 | 0.3597 | 0.4201 |
| 0.1115 | 20.0 | 8600 | 0.1501 | 0.6088 | 0.3597 | 0.4201 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.13.3
|
livinNector/IndicBERTv2-MLM-Sam-TLM-NER
|
livinNector
| 2023-05-18T13:44:09Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-15T17:56:40Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: IndicBERTv2-MLM-Sam-TLM-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndicBERTv2-MLM-Sam-TLM-NER
This model is a fine-tuned version of [ai4bharat/IndicBERTv2-MLM-Sam-TLM](https://huggingface.co/ai4bharat/IndicBERTv2-MLM-Sam-TLM) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4521
- Precision: 0.7629
- Recall: 0.7792
- F1: 0.7710
- Accuracy: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3268 | 0.49 | 1000 | 0.3440 | 0.7207 | 0.7602 | 0.7399 | 0.8887 |
| 0.2763 | 0.99 | 2000 | 0.3083 | 0.7568 | 0.7732 | 0.7649 | 0.8983 |
| 0.2604 | 1.48 | 3000 | 0.3312 | 0.7309 | 0.7494 | 0.7401 | 0.8909 |
| 0.2501 | 1.98 | 4000 | 0.3017 | 0.7415 | 0.7956 | 0.7676 | 0.9014 |
| 0.2269 | 2.47 | 5000 | 0.2930 | 0.7528 | 0.7970 | 0.7743 | 0.9050 |
| 0.223 | 2.96 | 6000 | 0.2963 | 0.7590 | 0.7963 | 0.7772 | 0.9053 |
| 0.2011 | 3.46 | 7000 | 0.2939 | 0.7627 | 0.7946 | 0.7783 | 0.9079 |
| 0.1999 | 3.95 | 8000 | 0.3036 | 0.7676 | 0.7903 | 0.7788 | 0.9069 |
| 0.1815 | 4.44 | 9000 | 0.3125 | 0.7618 | 0.7915 | 0.7764 | 0.9056 |
| 0.1777 | 4.94 | 10000 | 0.3083 | 0.7748 | 0.7957 | 0.7851 | 0.9098 |
| 0.1622 | 5.43 | 11000 | 0.3251 | 0.7721 | 0.7909 | 0.7814 | 0.9089 |
| 0.1598 | 5.93 | 12000 | 0.3197 | 0.7767 | 0.7947 | 0.7856 | 0.9092 |
| 0.145 | 6.42 | 13000 | 0.3366 | 0.7718 | 0.7986 | 0.7850 | 0.9101 |
| 0.1436 | 6.91 | 14000 | 0.3247 | 0.7776 | 0.7977 | 0.7875 | 0.9112 |
| 0.1306 | 7.41 | 15000 | 0.3502 | 0.7779 | 0.7958 | 0.7867 | 0.9107 |
| 0.1311 | 7.9 | 16000 | 0.3585 | 0.7857 | 0.7909 | 0.7883 | 0.9105 |
| 0.12 | 8.4 | 17000 | 0.3717 | 0.7768 | 0.7911 | 0.7839 | 0.9099 |
| 0.1202 | 8.89 | 18000 | 0.3667 | 0.7796 | 0.7882 | 0.7839 | 0.9100 |
| 0.1141 | 9.38 | 19000 | 0.3860 | 0.7857 | 0.7900 | 0.7879 | 0.9100 |
| 0.1113 | 9.88 | 20000 | 0.3824 | 0.7758 | 0.7970 | 0.7862 | 0.9094 |
| 0.1056 | 10.37 | 21000 | 0.4041 | 0.7740 | 0.7952 | 0.7845 | 0.9084 |
| 0.1073 | 10.86 | 22000 | 0.4062 | 0.7735 | 0.7929 | 0.7831 | 0.9094 |
| 0.1063 | 11.36 | 23000 | 0.4197 | 0.7720 | 0.7866 | 0.7793 | 0.9071 |
| 0.1026 | 11.85 | 24000 | 0.4179 | 0.7625 | 0.7767 | 0.7695 | 0.9040 |
| 0.1042 | 12.35 | 25000 | 0.4392 | 0.7639 | 0.7748 | 0.7693 | 0.9037 |
| 0.101 | 12.84 | 26000 | 0.4373 | 0.7533 | 0.7795 | 0.7662 | 0.9029 |
| 0.1003 | 13.33 | 27000 | 0.4554 | 0.7535 | 0.7774 | 0.7653 | 0.9021 |
| 0.0993 | 13.83 | 28000 | 0.4530 | 0.7555 | 0.7773 | 0.7663 | 0.9019 |
| 0.0978 | 14.32 | 29000 | 0.4467 | 0.7637 | 0.7843 | 0.7738 | 0.9050 |
| 0.0946 | 14.81 | 30000 | 0.4521 | 0.7629 | 0.7792 | 0.7710 | 0.9038 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
audreyfeldroy/ppo-Huggy
|
audreyfeldroy
| 2023-05-18T13:39:37Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-18T13:39:30Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: audreyfeldroy/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Mebosahr/Emj
|
Mebosahr
| 2023-05-18T13:38:07Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-18T13:38:07Z |
---
license: bigscience-openrail-m
---
|
AnanthZeke/tabert-1k-naamapadam
|
AnanthZeke
| 2023-05-18T13:30:15Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-18T11:37:46Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tabert-1k-naamapadam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tabert-1k-naamapadam
This model is a fine-tuned version of [livinNector/tabert-1k](https://huggingface.co/livinNector/tabert-1k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2825
- Precision: 0.7764
- Recall: 0.8055
- F1: 0.7907
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4618 | 0.05 | 400 | 0.3963 | 0.7329 | 0.6498 | 0.6889 | 0.8716 |
| 0.3869 | 0.1 | 800 | 0.3583 | 0.7145 | 0.7347 | 0.7244 | 0.8828 |
| 0.3642 | 0.15 | 1200 | 0.3511 | 0.7241 | 0.7412 | 0.7325 | 0.8842 |
| 0.3533 | 0.21 | 1600 | 0.3451 | 0.7393 | 0.7429 | 0.7411 | 0.8873 |
| 0.3501 | 0.26 | 2000 | 0.3367 | 0.7456 | 0.7562 | 0.7509 | 0.8899 |
| 0.3369 | 0.31 | 2400 | 0.3343 | 0.7476 | 0.7549 | 0.7512 | 0.8909 |
| 0.3302 | 0.36 | 2800 | 0.3282 | 0.7413 | 0.7584 | 0.7497 | 0.8926 |
| 0.3327 | 0.41 | 3200 | 0.3238 | 0.7584 | 0.7717 | 0.7650 | 0.8961 |
| 0.3248 | 0.46 | 3600 | 0.3209 | 0.7468 | 0.7795 | 0.7628 | 0.8956 |
| 0.3175 | 0.51 | 4000 | 0.3140 | 0.7659 | 0.7681 | 0.7670 | 0.8985 |
| 0.3132 | 0.57 | 4400 | 0.3111 | 0.7537 | 0.7795 | 0.7664 | 0.8970 |
| 0.3141 | 0.62 | 4800 | 0.3122 | 0.7529 | 0.7797 | 0.7661 | 0.8972 |
| 0.3077 | 0.67 | 5200 | 0.3138 | 0.7493 | 0.7844 | 0.7665 | 0.8974 |
| 0.309 | 0.72 | 5600 | 0.3099 | 0.7674 | 0.7729 | 0.7702 | 0.8992 |
| 0.3085 | 0.77 | 6000 | 0.3038 | 0.7626 | 0.7940 | 0.7780 | 0.9009 |
| 0.3031 | 0.82 | 6400 | 0.3055 | 0.7633 | 0.7834 | 0.7732 | 0.8992 |
| 0.2958 | 0.87 | 6800 | 0.3054 | 0.7621 | 0.7924 | 0.7770 | 0.8991 |
| 0.2953 | 0.93 | 7200 | 0.3076 | 0.7714 | 0.7834 | 0.7774 | 0.9005 |
| 0.2978 | 0.98 | 7600 | 0.3003 | 0.7729 | 0.7855 | 0.7792 | 0.9017 |
| 0.2826 | 1.03 | 8000 | 0.3016 | 0.7665 | 0.7905 | 0.7783 | 0.9012 |
| 0.2757 | 1.08 | 8400 | 0.3053 | 0.7520 | 0.8072 | 0.7786 | 0.8996 |
| 0.2751 | 1.13 | 8800 | 0.3026 | 0.7626 | 0.7982 | 0.7800 | 0.9008 |
| 0.2694 | 1.18 | 9200 | 0.2957 | 0.7682 | 0.8007 | 0.7841 | 0.9039 |
| 0.2723 | 1.23 | 9600 | 0.2944 | 0.7698 | 0.8005 | 0.7849 | 0.9039 |
| 0.2726 | 1.29 | 10000 | 0.2912 | 0.7774 | 0.7930 | 0.7851 | 0.9042 |
| 0.2674 | 1.34 | 10400 | 0.2912 | 0.7739 | 0.7973 | 0.7854 | 0.9043 |
| 0.2714 | 1.39 | 10800 | 0.2907 | 0.7729 | 0.7995 | 0.7860 | 0.9036 |
| 0.2625 | 1.44 | 11200 | 0.2949 | 0.7716 | 0.7965 | 0.7838 | 0.9041 |
| 0.2669 | 1.49 | 11600 | 0.2883 | 0.7701 | 0.8087 | 0.7889 | 0.9054 |
| 0.2601 | 1.54 | 12000 | 0.2868 | 0.7759 | 0.8069 | 0.7911 | 0.9066 |
| 0.2633 | 1.59 | 12400 | 0.2895 | 0.7659 | 0.8125 | 0.7885 | 0.9051 |
| 0.2641 | 1.65 | 12800 | 0.2878 | 0.7790 | 0.7972 | 0.7880 | 0.9059 |
| 0.2661 | 1.7 | 13200 | 0.2875 | 0.7800 | 0.7999 | 0.7898 | 0.9068 |
| 0.2719 | 1.75 | 13600 | 0.2853 | 0.7783 | 0.8025 | 0.7902 | 0.9070 |
| 0.2602 | 1.8 | 14000 | 0.2827 | 0.7801 | 0.8051 | 0.7924 | 0.9070 |
| 0.2688 | 1.85 | 14400 | 0.2819 | 0.7742 | 0.8061 | 0.7898 | 0.9066 |
| 0.2615 | 1.9 | 14800 | 0.2828 | 0.7764 | 0.8017 | 0.7888 | 0.9065 |
| 0.2623 | 1.95 | 15200 | 0.2825 | 0.7764 | 0.8055 | 0.7907 | 0.9068 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rafacel/ppo-LunarLander-v2
|
rafacel
| 2023-05-18T13:01:12Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T13:00:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.74 +/- 41.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the Deep RL course naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename: the course notebooks save the checkpoint as "<model-name>.zip".
checkpoint = load_from_hub(repo_id="rafacel/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
HoldMyData/Taxi-v3-unit2
|
HoldMyData
| 2023-05-18T12:59:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T12:58:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-unit2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="HoldMyData/Taxi-v3-unit2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kstn/mobilebert-uncased-finetuned-ner
|
kstn
| 2023-05-18T12:14:39Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"token-classification",
"generated_from_trainer",
"dataset:id_nergrit_corpus",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-18T06:37:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- id_nergrit_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mobilebert-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: id_nergrit_corpus
type: id_nergrit_corpus
config: ner
split: validation
args: ner
metrics:
- name: Precision
type: precision
value: 0.6699979179679367
- name: Recall
type: recall
value: 0.6136244458216141
- name: F1
type: f1
value: 0.6405732911990843
- name: Accuracy
type: accuracy
value: 0.8974442203210374
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert-uncased-finetuned-ner
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the id_nergrit_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3800
- Precision: 0.6700
- Recall: 0.6136
- F1: 0.6406
- Accuracy: 0.8974
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6239 | 1.0 | 1567 | 0.4989 | 0.5842 | 0.4877 | 0.5316 | 0.8688 |
| 0.5356 | 2.0 | 3134 | 0.4003 | 0.6368 | 0.5879 | 0.6113 | 0.8905 |
| 0.4035 | 3.0 | 4701 | 0.3800 | 0.6700 | 0.6136 | 0.6406 | 0.8974 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
solmysh/mt5-small-finetuned-amazon-en-es
|
solmysh
| 2023-05-18T11:51:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-16T13:46:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0135
- Rouge1: 16.5421
- Rouge2: 7.9012
- Rougel: 16.2574
- Rougelsum: 16.1537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4509 | 1.0 | 1209 | 3.1308 | 17.5055 | 8.164 | 16.9714 | 16.8977 |
| 3.4226 | 2.0 | 2418 | 3.0489 | 16.7302 | 8.1598 | 16.3268 | 16.3168 |
| 3.286 | 3.0 | 3627 | 3.0366 | 16.7244 | 7.9017 | 16.3893 | 16.3728 |
| 3.1859 | 4.0 | 4836 | 3.0219 | 16.9671 | 8.0508 | 16.6206 | 16.5261 |
| 3.1249 | 5.0 | 6045 | 3.0353 | 17.3032 | 8.0195 | 16.9664 | 16.972 |
| 3.0665 | 6.0 | 7254 | 3.0272 | 17.0115 | 7.88 | 16.7424 | 16.7476 |
| 3.0407 | 7.0 | 8463 | 3.0122 | 17.3339 | 8.0171 | 16.9919 | 16.9449 |
| 3.0248 | 8.0 | 9672 | 3.0135 | 16.5421 | 7.9012 | 16.2574 | 16.1537 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
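A hedged usage sketch (not part of the auto-generated card): the ROUGE metrics suggest a review-summarization task, so the checkpoint is assumed to work with the summarization pipeline; the input review is illustrative only:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="solmysh/mt5-small-finetuned-amazon-en-es")
review = "I bought this for my daughter and she loves it. The build quality is great and it arrived quickly."
print(summarizer(review, max_length=30)[0]["summary_text"])
```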
|
jumelet/lm_training
|
jumelet
| 2023-05-18T11:36:59Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-29T15:01:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: lm_training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lm_training
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 1.10.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
AnanthZeke/tabert-500-naamapadam
|
AnanthZeke
| 2023-05-18T11:35:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-18T09:24:00Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tabert-500-naamapadam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tabert-500-naamapadam
This model is a fine-tuned version of [livinNector/tabert-500](https://huggingface.co/livinNector/tabert-500) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2821
- Precision: 0.7818
- Recall: 0.8089
- F1: 0.7951
- Accuracy: 0.9070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.4684 | 0.05 | 400 | 0.3956 | 0.6972 | 0.6926 | 0.6949 | 0.8720 |
| 0.3901 | 0.1 | 800 | 0.3706 | 0.7099 | 0.7338 | 0.7216 | 0.8811 |
| 0.3658 | 0.15 | 1200 | 0.3551 | 0.7349 | 0.7388 | 0.7369 | 0.8854 |
| 0.3535 | 0.21 | 1600 | 0.3445 | 0.7333 | 0.7458 | 0.7395 | 0.8875 |
| 0.3512 | 0.26 | 2000 | 0.3353 | 0.7547 | 0.7408 | 0.7477 | 0.8917 |
| 0.3377 | 0.31 | 2400 | 0.3302 | 0.7417 | 0.7636 | 0.7525 | 0.8916 |
| 0.3297 | 0.36 | 2800 | 0.3279 | 0.7681 | 0.7330 | 0.7501 | 0.8931 |
| 0.3331 | 0.41 | 3200 | 0.3252 | 0.7448 | 0.7833 | 0.7636 | 0.8961 |
| 0.3247 | 0.46 | 3600 | 0.3210 | 0.7479 | 0.7847 | 0.7659 | 0.8960 |
| 0.3175 | 0.51 | 4000 | 0.3155 | 0.7684 | 0.7597 | 0.7640 | 0.8975 |
| 0.3142 | 0.57 | 4400 | 0.3113 | 0.7510 | 0.7833 | 0.7668 | 0.8977 |
| 0.315 | 0.62 | 4800 | 0.3131 | 0.7574 | 0.7830 | 0.7700 | 0.8969 |
| 0.3078 | 0.67 | 5200 | 0.3155 | 0.7569 | 0.7821 | 0.7693 | 0.8980 |
| 0.3101 | 0.72 | 5600 | 0.3117 | 0.7708 | 0.7730 | 0.7719 | 0.8990 |
| 0.3078 | 0.77 | 6000 | 0.3070 | 0.7665 | 0.7824 | 0.7744 | 0.8992 |
| 0.304 | 0.82 | 6400 | 0.3055 | 0.7680 | 0.7875 | 0.7776 | 0.8992 |
| 0.2954 | 0.87 | 6800 | 0.3019 | 0.7675 | 0.7929 | 0.7800 | 0.9002 |
| 0.2955 | 0.93 | 7200 | 0.3107 | 0.7804 | 0.7755 | 0.7779 | 0.9000 |
| 0.2979 | 0.98 | 7600 | 0.2992 | 0.7721 | 0.7931 | 0.7825 | 0.9021 |
| 0.2816 | 1.03 | 8000 | 0.3022 | 0.7695 | 0.7971 | 0.7831 | 0.9029 |
| 0.2768 | 1.08 | 8400 | 0.3043 | 0.7538 | 0.8045 | 0.7783 | 0.9003 |
| 0.2775 | 1.13 | 8800 | 0.2990 | 0.7687 | 0.8003 | 0.7842 | 0.9024 |
| 0.2704 | 1.18 | 9200 | 0.2948 | 0.7724 | 0.7987 | 0.7853 | 0.9023 |
| 0.2734 | 1.23 | 9600 | 0.2932 | 0.7764 | 0.7993 | 0.7877 | 0.9041 |
| 0.2746 | 1.29 | 10000 | 0.2918 | 0.7841 | 0.7949 | 0.7894 | 0.9046 |
| 0.2678 | 1.34 | 10400 | 0.2909 | 0.7775 | 0.8039 | 0.7905 | 0.9046 |
| 0.272 | 1.39 | 10800 | 0.2909 | 0.7786 | 0.7952 | 0.7868 | 0.9034 |
| 0.2636 | 1.44 | 11200 | 0.2900 | 0.7815 | 0.7959 | 0.7886 | 0.9044 |
| 0.2663 | 1.49 | 11600 | 0.2863 | 0.7747 | 0.8086 | 0.7913 | 0.9047 |
| 0.2617 | 1.54 | 12000 | 0.2876 | 0.7759 | 0.8042 | 0.7898 | 0.9051 |
| 0.2634 | 1.59 | 12400 | 0.2896 | 0.7677 | 0.8123 | 0.7894 | 0.9038 |
| 0.2651 | 1.65 | 12800 | 0.2871 | 0.7799 | 0.8024 | 0.7910 | 0.9058 |
| 0.2676 | 1.7 | 13200 | 0.2870 | 0.7863 | 0.8008 | 0.7935 | 0.9061 |
| 0.273 | 1.75 | 13600 | 0.2836 | 0.7804 | 0.8108 | 0.7953 | 0.9064 |
| 0.2611 | 1.8 | 14000 | 0.2821 | 0.7821 | 0.8052 | 0.7935 | 0.9064 |
| 0.2683 | 1.85 | 14400 | 0.2815 | 0.7791 | 0.8108 | 0.7946 | 0.9064 |
| 0.2624 | 1.9 | 14800 | 0.2818 | 0.7819 | 0.8090 | 0.7952 | 0.9071 |
| 0.2628 | 1.95 | 15200 | 0.2821 | 0.7818 | 0.8089 | 0.7951 | 0.9070 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_JMVAEconfig2_seed_0_ratio_0_c
|
asenella
| 2023-05-18T11:23:59Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T11:23:46Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`:
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
muhammadravi251001/fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05
|
muhammadravi251001
| 2023-05-18T11:22:25Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-05T05:18:50Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-Squad-ID-with-indobert-base-uncased-with-ITTL-with-freeze-LR-1e-05
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5175
- Exact Match: 48.5572
- F1: 65.0249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 2.0255 | 0.5 | 463 | 1.8578 | 38.8323 | 53.2780 |
| 1.8396 | 1.0 | 926 | 1.6659 | 43.2069 | 59.4121 |
| 1.6258 | 1.5 | 1389 | 1.5971 | 45.0913 | 61.6718 |
| 1.5939 | 2.0 | 1852 | 1.5523 | 46.3447 | 62.8415 |
| 1.4904 | 2.5 | 2315 | 1.5345 | 46.9589 | 63.7167 |
| 1.5015 | 3.0 | 2778 | 1.5060 | 47.4889 | 64.4261 |
| 1.3787 | 3.5 | 3241 | 1.5092 | 47.7833 | 64.2215 |
| 1.3629 | 4.0 | 3704 | 1.4885 | 48.0273 | 64.6938 |
| 1.3229 | 4.5 | 4167 | 1.5174 | 48.2712 | 64.9266 |
| 1.2848 | 5.0 | 4630 | 1.4942 | 48.4899 | 64.9576 |
| 1.2703 | 5.5 | 5093 | 1.5074 | 48.5657 | 65.0539 |
| 1.2104 | 6.0 | 5556 | 1.5112 | 48.1114 | 64.6513 |
| 1.1775 | 6.5 | 6019 | 1.5004 | 48.1534 | 64.8169 |
| 1.2303 | 7.0 | 6482 | 1.4956 | 48.4647 | 65.0723 |
| 1.1673 | 7.5 | 6945 | 1.5151 | 48.5825 | 65.0862 |
| 1.1771 | 8.0 | 7408 | 1.5057 | 48.5657 | 65.0123 |
| 1.1172 | 8.5 | 7871 | 1.5286 | 48.4311 | 64.7537 |
| 1.1282 | 9.0 | 8334 | 1.5175 | 48.5572 | 65.0249 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
rizvandwiki/gender-classification
|
rizvandwiki
| 2023-05-18T11:16:33Z | 2,039,551 | 48 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-12-06T08:53:43Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: gender-classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9244444370269775
---
# gender-classification
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### female
#### male
|
Intel/whisper-large-int8-dynamic-inc
|
Intel
| 2023-05-18T11:15:24Z | 8 | 1 |
transformers
|
[
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"int8",
"ONNX",
"PostTrainingDynamic",
"Intel® Neural Compressor",
"neural-compressor",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-10T09:02:21Z |
---
license: apache-2.0
datasets:
- librispeech_asr
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- automatic-speech-recognition
- int8
- ONNX
- PostTrainingDynamic
- Intel® Neural Compressor
- neural-compressor
library_name: transformers
---
## Model Details: INT8 Whisper large
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains without the need for fine-tuning.
This INT8 ONNX model was generated by [neural-compressor](https://github.com/intel/neural-compressor); the FP32 model can be exported with the command below:
```shell
optimum-cli export onnx --model openai/whisper-large whisper-large-with-past/ --task automatic-speech-recognition-with-past --opset 13
```
| Model Detail | Description |
| ----------- | ----------- |
| Model Authors - Company | Intel |
| Date | May 15, 2022 |
| Version | 1 |
| Type | Speech Recognition |
| Paper or Other Resources | - |
| License | Apache 2.0 |
| Questions or Comments | [Community Tab](https://huggingface.co/Intel/whisper-large-int8-dynamic/discussions)|
| Intended Use | Description |
| ----------- | ----------- |
| Primary intended uses | You can use the raw model for automatic speech recognition inference |
| Primary intended users | Anyone doing automatic speech recognition inference |
| Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.|
### How to use
Download the model by cloning the repository:
```shell
git clone https://huggingface.co/Intel/whisper-large-int8-dynamic
```
Evaluate the model with the code below:
```python
import os
from evaluate import load
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor, AutoConfig
model_name = 'openai/whisper-large'
model_path = 'whisper-large-int8-dynamic'
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
wer = load("wer")
librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import PretrainedConfig
model_config = PretrainedConfig.from_pretrained(model_name)
predictions = []
references = []
sessions = ORTModelForSpeechSeq2Seq.load_model(
os.path.join(model_path, 'encoder_model.onnx'),
os.path.join(model_path, 'decoder_model.onnx'),
os.path.join(model_path, 'decoder_with_past_model.onnx'))
model = ORTModelForSpeechSeq2Seq(sessions[0], sessions[1], model_config, model_path, sessions[2])
for idx, batch in enumerate(librispeech_test_clean):
    audio = batch["audio"]
    input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
    reference = processor.tokenizer._normalize(batch['text'])
    references.append(reference)
    predicted_ids = model.generate(input_features)[0]
    transcription = processor.decode(predicted_ids)
    prediction = processor.tokenizer._normalize(transcription)
    predictions.append(prediction)
wer_result = wer.compute(references=references, predictions=predictions)
print(f"Result wer: {wer_result * 100}")
accuracy = 1 - wer_result
print("Accuracy: %.5f" % accuracy)
```
## Metrics (Model Performance):
| Model | Model Size (GB) | wer |
|---|:---:|:---:|
| FP32 |9.4|3.04|
| INT8 |2.4|2.89|
|
Iwansl/Rere
|
Iwansl
| 2023-05-18T11:07:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-18T11:06:21Z |
---
license: creativeml-openrail-m
---
|
WALIDALI/walidlibyaly-burjkhalifaly-bekiksrily-libyatraclo
|
WALIDALI
| 2023-05-18T11:04:15Z | 31 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-18T10:35:16Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### walidlibyaly-burjkhalifaly-bekiksrily-libyatraclo Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
bhattronak14/distilbert-base-uncased-finetuned-rte
|
bhattronak14
| 2023-05-18T10:55:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-17T06:31:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
pmysl/805Na-diffusers
|
pmysl
| 2023-05-18T10:20:29Z | 31 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-03-12T03:07:23Z |
---
pipeline_tag: text-to-image
widget:
- text: "A photo of sks tram in the Minecraft style"
example_title: "Minecraft"
- text: "A photo of sks tram with the Eiffel Tower in the background"
example_title: "Eiffel Tower"
- text: "A photo of sks tram on the Mars"
example_title: "Mars"
---
This is a fine-tuned Stable Diffusion model designed to create images of the Konstal 805Na tram. Use `sks tram` in the prompt when referring to the 805Na.
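A minimal generation sketch (not from the original card), using one of the widget prompts and assuming the standard `StableDiffusionPipeline` interface declared in the repo tags:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("pmysl/805Na-diffusers", torch_dtype=torch.float16).to("cuda")
image = pipe("A photo of sks tram with the Eiffel Tower in the background").images[0]
image.save("tram.png")
```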
|
HAttORi/ICBINP-Photorealistic
|
HAttORi
| 2023-05-18T10:18:49Z | 0 | 3 | null |
[
"art",
"text-to-image",
"region:us"
] |
text-to-image
| 2023-05-18T09:38:16Z |
---
pipeline_tag: text-to-image
tags:
- art
---
|
SHENMU007/neunit_tts_1.1
|
SHENMU007
| 2023-05-18T10:18:03Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-05-18T08:12:46Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
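A hedged inference sketch (not from the card), following the standard SpeechT5 TTS API; the input text is a placeholder and the speaker embedding below is a random stand-in for real x-vectors:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "SHENMU007/neunit_tts_1.1"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好", return_tensors="pt")  # placeholder text
# SpeechT5 expects a 512-dim speaker embedding; a random vector is used only as a stand-in.
speaker_embeddings = torch.randn(1, 512)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```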
|
Ashutosh1976/Ashutosh1976
|
Ashutosh1976
| 2023-05-18T10:13:47Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-05-18T08:34:03Z |
---
license: bigcode-openrail-m
---
|
Lkhappy/1
|
Lkhappy
| 2023-05-18T09:57:20Z | 0 | 0 | null |
[
"aa",
"dataset:databricks/databricks-dolly-15k",
"license:openrail",
"region:us"
] | null | 2023-05-18T09:56:42Z |
---
license: openrail
datasets:
- databricks/databricks-dolly-15k
language:
- aa
metrics:
- accuracy
---
|
DarrenLo/fine-tuned-dialogpt-pal
|
DarrenLo
| 2023-05-18T09:53:34Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:empathetic_dialogues",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-18T08:07:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- empathetic_dialogues
model-index:
- name: fine-tuned-dialogpt-pal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-dialogpt-pal
This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on the empathetic_dialogues dataset.
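Since the base model is DialoGPT-medium, the usual DialoGPT chat format should apply; a minimal single-turn sketch (the user message and generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DarrenLo/fine-tuned-dialogpt-pal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# DialoGPT-style chat: the user turn is followed by the EOS token
input_ids = tokenizer.encode("I had a rough day at work." + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# decode only the newly generated reply tokens
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```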
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DD0101/disfluency-large-2
|
DD0101
| 2023-05-18T09:48:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-18T08:53:54Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: disfluency-large-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disfluency-large-2
This model is a fine-tuned version of [vinai/phobert-large](https://huggingface.co/vinai/phobert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0318
- Precision: 0.9837
- Recall: 0.9808
- F1: 0.9822
- Accuracy: 0.9946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 140 | 0.0439 | 0.9538 | 0.9561 | 0.9550 | 0.9890 |
| No log | 2.0 | 280 | 0.0314 | 0.9660 | 0.9736 | 0.9698 | 0.9906 |
| No log | 3.0 | 420 | 0.0394 | 0.9710 | 0.9651 | 0.9681 | 0.9909 |
| 0.1105 | 4.0 | 560 | 0.0320 | 0.9795 | 0.9784 | 0.9790 | 0.9929 |
| 0.1105 | 5.0 | 700 | 0.0450 | 0.9704 | 0.9657 | 0.9681 | 0.9904 |
| 0.1105 | 6.0 | 840 | 0.0463 | 0.9776 | 0.9694 | 0.9734 | 0.9911 |
| 0.1105 | 7.0 | 980 | 0.0480 | 0.9706 | 0.9712 | 0.9709 | 0.9909 |
| 0.0113 | 8.0 | 1120 | 0.0318 | 0.9837 | 0.9808 | 0.9822 | 0.9946 |
| 0.0113 | 9.0 | 1260 | 0.0419 | 0.9699 | 0.9669 | 0.9684 | 0.9915 |
| 0.0113 | 10.0 | 1400 | 0.0458 | 0.9735 | 0.9712 | 0.9723 | 0.9920 |
| 0.0051 | 11.0 | 1540 | 0.0309 | 0.9777 | 0.9766 | 0.9771 | 0.9935 |
| 0.0051 | 12.0 | 1680 | 0.0232 | 0.9820 | 0.9820 | 0.9820 | 0.9951 |
| 0.0051 | 13.0 | 1820 | 0.0344 | 0.9849 | 0.9784 | 0.9816 | 0.9945 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aliakyurek/Taxi-v3
|
aliakyurek
| 2023-05-18T09:30:55Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T09:30:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="aliakyurek/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
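Note that `load_from_hub` comes from the course notebooks rather than a published package; a minimal equivalent, assuming the checkpoint is a pickled dict as saved by the course code:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```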
|
RenauxLouis/monet-test-1000steps-116-realsize-v2
|
RenauxLouis
| 2023-05-18T09:27:23Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-18T08:32:23Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - RenauxLouis/monet-test-1000steps-116-realsize-v2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the real-size-116 dataset. You can find some example images below.




|
guoguangjie/my_wikilingua_t5small
|
guoguangjie
| 2023-05-18T08:56:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-18T08:43:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_wikilingua_t5small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_wikilingua_t5small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6035
- Rouge1: 0.2226
- Rouge2: 0.0638
- Rougel: 0.1839
- Rougelsum: 0.1838
- Gen Len: 18.725
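A quick way to try the checkpoint is the summarization pipeline; a minimal sketch (the input text and length settings are illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="guoguangjie/my_wikilingua_t5small")
article = "Long how-to article text goes here ..."  # illustrative placeholder input
print(summarizer(article, max_length=48, min_length=8, do_sample=False))
```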
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 100 | 2.7179 | 0.2156 | 0.0579 | 0.1742 | 0.1741 | 18.835 |
| No log | 2.0 | 200 | 2.6370 | 0.2213 | 0.0637 | 0.1796 | 0.1794 | 18.805 |
| No log | 3.0 | 300 | 2.6105 | 0.2239 | 0.064 | 0.1834 | 0.1833 | 18.79 |
| No log | 4.0 | 400 | 2.6035 | 0.2226 | 0.0638 | 0.1839 | 0.1838 | 18.725 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
metalis/pythia_410m_dialog_test_v1
|
metalis
| 2023-05-18T08:40:51Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-04T23:19:12Z |
---
license: apache-2.0
---
Pythia 410M model fine-tuned for dialog.
Example prompt:
```
###I###
Jhon talks to Mike.
Jhon tells Mary about how he likes his new job.
happy
###P###
Jhon: ...
Mary: ...
```
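A minimal generation sketch with transformers, assuming the prompt format shown above; the sampling settings are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "metalis/pythia_410m_dialog_test_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# build a prompt in the ###I### / ###P### format and let the model continue the dialog
prompt = (
    "###I###\n"
    "Jhon talks to Mike.\n"
    "Jhon tells Mary about how he likes his new job.\n"
    "happy\n"
    "###P###\n"
    "Jhon:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```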
|
jroberts/distilgpt2-ft
|
jroberts
| 2023-05-18T08:39:36Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-18T08:37:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ft
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3824
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000166
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 16 | 2.2852 |
| No log | 2.0 | 32 | 2.2098 |
| No log | 3.0 | 48 | 2.2370 |
| No log | 4.0 | 64 | 2.3000 |
| No log | 5.0 | 80 | 2.3898 |
| No log | 6.0 | 96 | 2.4586 |
| No log | 7.0 | 112 | 2.5484 |
| No log | 8.0 | 128 | 2.6572 |
| No log | 9.0 | 144 | 2.7703 |
| No log | 10.0 | 160 | 2.9010 |
| No log | 11.0 | 176 | 2.9734 |
| No log | 12.0 | 192 | 3.0461 |
| No log | 13.0 | 208 | 3.1837 |
| No log | 14.0 | 224 | 3.2359 |
| No log | 15.0 | 240 | 3.2506 |
| No log | 16.0 | 256 | 3.2979 |
| No log | 17.0 | 272 | 3.3512 |
| No log | 18.0 | 288 | 3.3811 |
| No log | 19.0 | 304 | 3.3787 |
| No log | 20.0 | 320 | 3.3824 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Billsfriend/chinese-Alpaca-7b-plus-ggml-q8_0
|
Billsfriend
| 2023-05-18T08:33:49Z | 0 | 9 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-05-11T11:50:20Z |
---
license: apache-2.0
---
This model is `decapoda-research/llama-7b-hf` merged with the `ziqingyang/chinese-alpaca-plus-lora-7b` LoRA and quantized for use with `ggerganov/llama.cpp`.
The conversion and quantization were done on Google Colab, following the wiki article of `ymcui/Chinese-LLaMA-Alpaca`.
The quantization formats in llama.cpp have recently been updated, so please clone the latest repo and re-compile before loading the model.
The q8_0 and q5_1 suffixes indicate different quantization methods; the former occupies more space and should theoretically produce better responses.
Example of the q8_0 model on llama.cpp (acceptable responses, but very short):
```
> ./main -m ./models/chinese-Alpaca-7b-plus-ggml-q8_0.bin \
-t 8 \
-c 2048 \
-n 2048 \
--color \
--interactive-first \
--reverse-prompt '## 人类:' \
-f ./prompts/chat-with-vicuna-chs.txt
main: build = 0 (unknown)
main: seed = 1683883289
llama.cpp: loading model from ./models/chinese-Alpaca-7b-plus-ggml-q8_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 49954
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 7 (mostly Q8_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 68.20 KB
llama_model_load_internal: mem required = 9180.13 MB (+ 1026.00 MB per state)
llama_init_from_file: kv self size = 1024.00 MB
system_info: n_threads = 8 / 12 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
Reverse prompt: '## 人类:'
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 2048, n_batch = 512, n_predict = 2048, n_keep = 0
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMa.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
这是一段人类和人工智能助理的对话。人工智能助理会对人类的问题给出详细、准确、礼貌的回答。
## 人类:你好,助理。
## 助理:你好,有什么需要帮助的吗?
## 人类:天空为什么是蓝色的?
助理:天空之所以呈现蓝色,是因为太阳光在大气层中发生散射作用。
## 人类:请再详细地解释一下。
助理:太阳光穿过大气层时,一部分被散射到不同波长的角度上,其中蓝色光的波长比其他颜色短。
## 人类:你真是惜字如金啊。
助理:谢谢你的夸奖,但我只是根据我学到的知识和算法来回答你的问题。
## 人类:
llama_print_timings: load time = 9418.31 ms
llama_print_timings: sample time = 107.95 ms / 73 runs ( 1.48 ms per run)
llama_print_timings: prompt eval time = 8645.76 ms / 85 tokens ( 101.71 ms per token)
llama_print_timings: eval time = 16303.43 ms / 73 runs ( 223.33 ms per run)
llama_print_timings: total time = 987546.29 ms
```
|
QuickSilver007/rlv2unit4_Reinforce-CartPole-v1
|
QuickSilver007
| 2023-05-18T08:28:35Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T08:28:25Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: rlv2unit4_Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ViktorDo/bert-finetuned-ner
|
ViktorDo
| 2023-05-18T08:21:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-15T11:27:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9322761810373307
- name: Recall
type: recall
value: 0.9498485358465163
- name: F1
type: f1
value: 0.9409803267755917
- name: Accuracy
type: accuracy
value: 0.9862541943839407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0613
- Precision: 0.9323
- Recall: 0.9498
- F1: 0.9410
- Accuracy: 0.9863
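For quick inference, the model can be loaded through the token-classification pipeline; a minimal sketch (the example sentence is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ViktorDo/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```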
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0907 | 1.0 | 1756 | 0.0649 | 0.9211 | 0.9371 | 0.9290 | 0.9832 |
| 0.0352 | 2.0 | 3512 | 0.0612 | 0.9310 | 0.9493 | 0.9401 | 0.9863 |
| 0.0164 | 3.0 | 5268 | 0.0613 | 0.9323 | 0.9498 | 0.9410 | 0.9863 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
qianjiaying/simcse-tinybert
|
qianjiaying
| 2023-05-18T08:21:20Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-18T08:18:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1254 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
fshfurnitures/Bedfurnituredubai
|
fshfurnitures
| 2023-05-18T08:16:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-18T08:14:36Z |
[furniture stores](https://fshfurniture.ae/)
|
SHENMU007/neunit_tts_1.0
|
SHENMU007
| 2023-05-18T07:58:28Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-05-18T06:15:59Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
egarciamartin/ppo-SnowballTarget
|
egarciamartin
| 2023-05-18T07:57:35Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-18T07:56:31Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: egarciamartin/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jiawei1998/metaner-base
|
jiawei1998
| 2023-05-18T07:48:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-11T06:13:01Z |
---
language:
- en
---
Related to https://github.com/chen700564/metaner-icl
|
seonglae/openie5
|
seonglae
| 2023-05-18T07:34:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-05-18T01:05:49Z |
Requires OpenJDK 8 (64-bit).

```
java -Xmx10g -XX:+UseConcMarkSweepGC -jar openie-assembly-5.0-SNAPSHOT.jar
```
[CLI Option](https://texonom.com/0b296be12ed64e9f9f94e2567bd798e8)
|
scarlettlin/path-to-save-model
|
scarlettlin
| 2023-05-18T07:25:20Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-18T06:10:04Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of a T1-MRI brain scan in axial view
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - scarlettlin/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of a T1-MRI brain scan in axial view using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
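A minimal diffusers sketch using the instance prompt from the metadata above (device handling and the output filename are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "scarlettlin/path-to-save-model", torch_dtype=torch.float16
).to("cuda")

# the instance prompt used during DreamBooth training
image = pipe("a photo of a T1-MRI brain scan in axial view").images[0]
image.save("t1_mri_sample.png")
```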
|
wa976/ast_15-finetuned-ICBHI
|
wa976
| 2023-05-18T07:10:58Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-17T18:18:53Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ast_15-finetuned-ICBHI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast_15-finetuned-ICBHI
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1688
- Accuracy: 0.5397
- Sensitivity: 0.2727
- Specificity: 0.7389
- Score: 0.5058
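A minimal sketch for running inference with the audio-classification pipeline (the audio filename is an illustrative placeholder):

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="wa976/ast_15-finetuned-ICBHI")
# path to a respiratory-sound clip; the filename is illustrative
print(clf("breathing_cycle.wav"))
```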
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Sensitivity | Specificity | Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:-----------:|:------:|
| 0.7488 | 1.0 | 259 | 1.1831 | 0.5241 | 0.3551 | 0.6502 | 0.5027 |
| 0.7831 | 2.0 | 518 | 1.1688 | 0.5397 | 0.2727 | 0.7389 | 0.5058 |
| 0.7471 | 3.0 | 777 | 1.1593 | 0.5198 | 0.3772 | 0.6261 | 0.5017 |
| 0.5336 | 4.0 | 1036 | 1.4082 | 0.5281 | 0.3152 | 0.6869 | 0.5011 |
| 0.3833 | 5.0 | 1295 | 2.0232 | 0.4838 | 0.3840 | 0.5583 | 0.4712 |
| 0.1721 | 6.0 | 1554 | 2.5558 | 0.4893 | 0.3534 | 0.5906 | 0.4720 |
| 0.2745 | 7.0 | 1813 | 3.3175 | 0.4900 | 0.3917 | 0.5634 | 0.4775 |
| 0.0596 | 8.0 | 2072 | 3.6548 | 0.5143 | 0.3628 | 0.6274 | 0.4951 |
| 0.0034 | 9.0 | 2331 | 3.9119 | 0.5082 | 0.4053 | 0.5849 | 0.4951 |
| 0.0008 | 10.0 | 2590 | 4.3407 | 0.4875 | 0.4562 | 0.5108 | 0.4835 |
| 0.0 | 11.0 | 2849 | 4.1927 | 0.5136 | 0.3636 | 0.6255 | 0.4946 |
| 0.0 | 12.0 | 3108 | 4.2227 | 0.5111 | 0.3645 | 0.6204 | 0.4924 |
| 0.0 | 13.0 | 3367 | 4.2399 | 0.5114 | 0.3653 | 0.6204 | 0.4929 |
| 0.0 | 14.0 | 3626 | 4.2521 | 0.5114 | 0.3662 | 0.6198 | 0.4930 |
| 0.0 | 15.0 | 3885 | 4.2556 | 0.5114 | 0.3662 | 0.6198 | 0.4930 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jungnerd/jungnerd_qa_model
|
jungnerd
| 2023-05-18T07:07:15Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-18T02:04:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: jungnerd_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jungnerd_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6623
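A minimal question-answering sketch with the transformers pipeline (question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="jungnerd/jungnerd_qa_model")
result = qa(
    question="Where do I live?",                      # illustrative question
    context="My name is Sarah and I live in London.",  # illustrative context
)
print(result["answer"], result["score"])
```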
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4348 |
| 2.7609 | 2.0 | 500 | 1.7421 |
| 2.7609 | 3.0 | 750 | 1.6623 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
charlieoneill/lunar_new
|
charlieoneill
| 2023-05-18T06:57:55Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T06:55:48Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -231.53 +/- 121.00
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
nightdessert/WeCheck
|
nightdessert
| 2023-05-18T06:42:14Z | 97 | 2 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"text-generation",
"arxiv:2212.10057",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-16T03:57:44Z |
---
pipeline_tag: text-generation
---
# Factual Consistency Evaluator/Metric in ACL 2023 paper
*[WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning
](https://arxiv.org/abs/2212.10057)*
Open-sourced code: https://github.com/nightdessert/WeCheck
## Model description
WeCheck is a factual consistency metric trained from weakly annotated samples.
This WeCheck checkpoint can be used to check the following three generation tasks:
**Text Summarization/Knowlege grounded dialogue Generation/Paraphrase**
This WeCheck checkpoint is trained based on the following three weak labler:
*[QAFactEval
](https://github.com/salesforce/QAFactEval)* / *[Summarc](https://github.com/tingofurro/summac)* / *[NLI warmup](https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli)*
---
# How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "nightdessert/WeCheck"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # move the model to the same device as the inputs
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing." # Input for Summarization/ Dialogue / Paraphrase
hypothesis = "The movie was not good." # Output for Summarization/ Dialogue / Paraphrase
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt", truncation_strategy="only_first", max_length=512)
output = model(input["input_ids"].to(device))['logits'][:,0] # device = "cuda:0" or "cpu"
prediction = torch.sigmoid(output).tolist()
print(prediction) #0.884
```
or apply for a batch of samples
```python
premise = ["I first thought that I liked the movie, but upon second thought it was actually disappointing."]*3 # Input list for Summarization/ Dialogue / Paraphrase
hypothesis = ["The movie was not good."]*3 # Output list for Summarization/ Dialogue / Paraphrase
batch_tokens = tokenizer.batch_encode_plus(list(zip(premise, hypothesis)), padding=True,
truncation=True, max_length=512, return_tensors="pt", truncation_strategy="only_first")
output = model(batch_tokens["input_ids"].to(device))['logits'][:,0] # device = "cuda:0" or "cpu"
prediction = torch.sigmoid(output).tolist()
print(prediction) #[0.884,0.884,0.884]
```
License: openrail
Pipeline tag: text-classification
Language: en
Tags: Factual Consistency, Natural Language Inference, Factual Consistency Evaluation
|
gkrishnan/distilbert_classifier_newsgroups
|
gkrishnan
| 2023-05-18T06:39:35Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T06:39:03Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lewdryuna/A-hakomay
|
lewdryuna
| 2023-05-18T06:08:30Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-05-18T06:08:30Z |
---
duplicated_from: 852wa/hakoMay
---
|
jokyere49/Reinforce-pixelCopter
|
jokyere49
| 2023-05-18T06:01:45Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T05:59:58Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 42.00 +/- 29.61
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
asenella/mmnist_JNFDccaconfig2_seed_3_ratio_0_c
|
asenella
| 2023-05-18T05:58:30Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T05:58:22Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
mortal99/test
|
mortal99
| 2023-05-18T05:52:16Z | 0 | 0 | null |
[
"paddlepaddle",
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-18T05:47:47Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: A picture of <target> coding
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---
# LoRA DreamBooth - mortal99/test
The LoRA weights in this repository were trained on top of runwayml/stable-diffusion-v1-5 using the [DreamBooth](https://dreambooth.github.io/) technique with the instance prompt `A picture of <target> coding`.
|
AustinCarthy/Benign10MGPT2_fromB_BFall_30KGen_toP_0.75
|
AustinCarthy
| 2023-05-18T05:44:42Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T02:42:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_fromB_BFall_30KGen_toP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_fromB_BFall_30KGen_toP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1066
- Accuracy: 0.9827
- F1: 0.7997
- Precision: 0.8920
- Recall: 0.7248
- Roc Auc Score: 0.8602
- Tpr At Fpr 0.01: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0859 | 1.0 | 26250 | 0.0749 | 0.9823 | 0.7832 | 0.9388 | 0.6718 | 0.8348 | 0.5556 |
| 0.074 | 2.0 | 52500 | 0.0810 | 0.9803 | 0.7718 | 0.8628 | 0.6982 | 0.8463 | 0.5496 |
| 0.0534 | 3.0 | 78750 | 0.0735 | 0.9846 | 0.8211 | 0.9211 | 0.7406 | 0.8687 | 0.5882 |
| 0.0374 | 4.0 | 105000 | 0.0877 | 0.9830 | 0.8023 | 0.8976 | 0.7254 | 0.8606 | 0.0 |
| 0.0267 | 5.0 | 131250 | 0.1066 | 0.9827 | 0.7997 | 0.8920 | 0.7248 | 0.8602 | 0.0 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
suraj47K/keras-dummy-sequential
|
suraj47K
| 2023-05-18T05:42:09Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-05-18T05:42:07Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
asenella/mmnist_JNFDccaconfig2_seed_0_ratio_02_c
|
asenella
| 2023-05-18T05:34:00Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T05:33:53Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Mikepool117/a2c-AntBulletEnv-v0
|
Mikepool117
| 2023-05-18T05:24:38Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-18T05:22:48Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1385.41 +/- 178.51
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
|
junweiliao/ppo-Huggy
|
junweiliao
| 2023-05-18T05:21:28Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-18T05:21:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: junweiliao/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deutsche-telekom/mt5-small-sum-de-mit-v1
|
deutsche-telekom
| 2023-05-18T05:02:05Z | 2,243 | 8 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"de",
"dataset:swiss_text_2019",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- de
license: mit
tags:
- summarization
datasets:
- swiss_text_2019
---
# mT5-small-sum-de-mit-v1
This is a German summarization model. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). The special characteristic of this model is that, unlike many other models, it is licensed under a permissive open source license (MIT). Among other things, this license allows commercial use.
[](https://www.welove.ai/)
This model is provided by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
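A minimal inference sketch, assuming the `summarize: ` source prefix used during training (see Training below); the generation settings are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "deutsche-telekom/mt5-small-sum-de-mit-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # your German input text (illustrative placeholder)
inputs = tokenizer("summarize: " + article, max_length=800, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```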
## Training
The training was conducted with the following hyperparameters:
- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3 (6)
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5
## Datasets and Preprocessing
The datasets were preprocessed as follows:
The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.
This model is trained on the following dataset:
| Name | Language | Size | License
|------|----------|------|--------
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | Concrete license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html).
We have permission to use the Swisstext dataset and release the resulting summarization model under MIT license (see [permission-declaration-swisstext.pdf](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/resolve/main/permission-declaration-swisstext.pdf)).
## Evaluation on MLSUM German Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| deutsche-telekom/mt5-small-sum-de-mit-v1 (this) | 16.8023 | 3.5531 | 12.6884 | 14.7624
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
| **[deutsche-telekom/mt5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1)** | **21.7336** | **7.2614** | **17.1323** | **19.3977**
## License
Copyright (c) 2021 Philip May, Deutsche Telekom AG
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/blob/main/LICENSE) in the repository.
|
Wanfq/MAKER-mwoz-condensed-kb-t5-large
|
Wanfq
| 2023-05-18T04:45:05Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T05:45:15Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Wanfq/MAKER-mwoz-condensed-kb-t5-base
|
Wanfq
| 2023-05-18T04:44:47Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T05:44:54Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Wanfq/MAKER-camrest-full-kb-t5-large
|
Wanfq
| 2023-05-18T04:44:20Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T03:47:14Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Wanfq/MAKER-camrest-full-kb-t5-base
|
Wanfq
| 2023-05-18T04:43:57Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T03:46:58Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Wanfq/MAKER-mwoz-full-kb-t5-large
|
Wanfq
| 2023-05-18T04:42:44Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T03:00:01Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
Wanfq/MAKER-mwoz-full-kb-t5-base
|
Wanfq
| 2023-05-18T04:42:16Z | 0 | 0 | null |
[
"conversational",
"en",
"arxiv:2305.10149",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-17T02:39:43Z |
---
license: apache-2.0
language:
- en
pipeline_tag: conversational
---
This is a repo of the models of [Multi-Grained Knowledge Retrieval for End-to-End Task-Oriented Dialog](https://arxiv.org/abs/2305.10149), a paper in **ACL 2023**. For more details about the models, please refer to our [github repo](https://github.com/18907305772/MAKER).
|
asenella/mmnist_JNFDccaconfig2_seed_1_ratio_0_c
|
asenella
| 2023-05-18T03:15:59Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T03:15:52Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
yyyynnnniiii/Trainer_Albert_2023-05-18
|
yyyynnnniiii
| 2023-05-18T02:58:01Z | 0 | 0 | null |
[
"finance",
"text-classification",
"en",
"dataset:yyyynnnniiii/WSJ_0518",
"region:us"
] |
text-classification
| 2023-05-18T02:36:58Z |
---
datasets:
- yyyynnnniiii/WSJ_0518
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- finance
---
|
SHENMU007/neunit_test
|
SHENMU007
| 2023-05-18T02:55:25Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-05-17T04:02:09Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
|
AbdulHafiz9940/t5-small-finetuned-test1
|
AbdulHafiz9940
| 2023-05-18T02:49:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-17T08:47:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-test1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2837
- Rouge1: 22.7012
- Rouge2: 0.0
- Rougel: 22.7156
- Rougelsum: 22.7348
- Gen Len: 2.2686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.5353 | 1.0 | 2601 | 2.3131 | 22.0732 | 0.0 | 22.1069 | 22.1229 | 2.2647 |
| 2.4728 | 2.0 | 5202 | 2.2838 | 22.7012 | 0.0 | 22.7156 | 22.7348 | 2.2686 |
| 2.4819 | 3.0 | 7803 | 2.2837 | 22.7012 | 0.0 | 22.7156 | 22.7348 | 2.2686 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AlexC98/commitRoBerta_
|
AlexC98
| 2023-05-18T01:34:52Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-17T23:44:47Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
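
A minimal sketch, assuming the repository tags (`xlm-roberta`, `text-classification`) describe the head; the label names are undocumented and the example input is illustrative only:

```python
from transformers import pipeline

# Labels are undocumented for this checkpoint
classifier = pipeline("text-classification", model="AlexC98/commitRoBerta_")
print(classifier("Fix null pointer exception in the commit parser"))
```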
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Vodolay/oldbooks-lora
|
Vodolay
| 2023-05-18T01:23:13Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-18T00:34:16Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Vodolay/oldbooks-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the gigant/oldbookillustrations dataset. Some example images are shown below.




|
cyberagent/open-calm-large
|
cyberagent
| 2023-05-18T01:11:13Z | 2,142 | 10 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-15T06:50:24Z |
---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-Large
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-large", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-large")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
```
|
cyberagent/open-calm-medium
|
cyberagent
| 2023-05-18T01:10:54Z | 283 | 4 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"japanese",
"causal-lm",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:mc4",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-05-15T06:44:47Z |
---
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- mc4
language:
- ja
tags:
- japanese
- causal-lm
inference: false
---
# OpenCALM-Medium
## Model Description
OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-medium", device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-medium")
inputs = tokenizer("AIによって私達の暮らしは、", return_tensors="pt").to(model.device)
with torch.no_grad():
tokens = model.generate(
**inputs,
max_new_tokens=64,
do_sample=True,
temperature=0.7,
top_p=0.9,
repetition_penalty=1.05,
pad_token_id=tokenizer.pad_token_id,
)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(output)
```
## Model Details
|Model|Params|Layers|Dim|Heads|Dev ppl|
|:---:|:---: |:---:|:---:|:---:|:---:|
|[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7|
|[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8|
|[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3|
|[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3|
|[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7|
|[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2|
* **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/)
* **Model type**: Transformer-based Language Model
* **Language**: Japanese
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc.
* Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/
* Example (ja): 本モデルは、株式会社サイバーエージェントによるOpenCALM-XXをファインチューニングしたものです。元のモデルはCC BY-SA 4.0ライセンスのもとで公開されており、本モデルも同じくCC BY-SA 4.0ライセンスで公開します。詳しくはこちらをご覧ください: https://creativecommons.org/licenses/by-sa/4.0/
## Training Dataset
* Wikipedia (ja)
* Common Crawl (ja)
## Author
[Ryosuke Ishigami](https://huggingface.co/rishigami)
## Citations
```bibtex
@software{gpt-neox-library,
title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
url = {https://www.github.com/eleutherai/gpt-neox},
doi = {10.5281/zenodo.5879544},
month = {8},
year = {2021},
version = {0.0.1},
}
```
|
REDSCARE/RS2281
|
REDSCARE
| 2023-05-18T01:06:09Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"chemistry",
"en",
"es",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:other",
"region:us"
] | null | 2023-05-18T01:04:34Z |
---
license: other
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
- es
metrics:
- accuracy
library_name: adapter-transformers
tags:
- chemistry
---
|
yarak001/distilbert-base-uncased-finetuned-emotion
|
yarak001
| 2023-05-18T01:03:56Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T00:28:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225635095680048
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9225
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
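
A minimal inference sketch, assuming the checkpoint keeps the six labels of the upstream `emotion` dataset (sadness, joy, love, anger, fear, surprise):

```python
from transformers import pipeline

# The label set is assumed to follow the emotion dataset used for fine-tuning
classifier = pipeline("text-classification", model="yarak001/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am thrilled that training finally converged!"))
```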
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3127 | 0.903 | 0.9000 |
| 0.247 | 2.0 | 500 | 0.2207 | 0.9225 | 0.9226 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ItchyB/ppo-LunarLander-v2
|
ItchyB
| 2023-05-18T00:44:56Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-20T00:54:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 294.62 +/- 17.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub(repo_id="ItchyB/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Toffee0705/ppo-Huggy
|
Toffee0705
| 2023-05-18T00:37:30Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-18T00:36:54Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Toffee0705/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pkuong/distilbert_classifier_newsgroups
|
pkuong
| 2023-05-18T00:22:29Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T00:22:11Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
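
A minimal TensorFlow inference sketch, assuming the published weights form a standard sequence-classification head; the mapping from predicted index to newsgroup name is undocumented:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pkuong/distilbert_classifier_newsgroups")
model = TFAutoModelForSequenceClassification.from_pretrained("pkuong/distilbert_classifier_newsgroups")

# The predicted class index must be mapped to a newsgroup name by the user
inputs = tokenizer("The rocket launch was delayed by bad weather.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```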
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Hex820000/StoriesToon_v1
|
Hex820000
| 2023-05-18T00:15:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-17T23:46:39Z |
---
license: creativeml-openrail-m
---
|
pratikcha/DummyModelTest
|
pratikcha
| 2023-05-17T23:50:16Z | 0 | 0 | null |
[
"code",
"en",
"region:us"
] | null | 2023-05-17T23:49:33Z |
---
language:
- en
tags:
- code
---
|
Abhinav2499/gpt2-token-class
|
Abhinav2499
| 2023-05-17T23:48:02Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-14T02:47:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gpt2-token-class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-token-class
This model is a fine-tuned version of [Jean-Baptiste/roberta-large-ner-english](https://huggingface.co/Jean-Baptiste/roberta-large-ner-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4239
- Precision: 0.8559
- Recall: 0.7666
- F1: 0.8020
- Accuracy: 0.9193
## Model description
More information needed
## Intended uses & limitations
More information needed
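
A minimal inference sketch using the `token-classification` pipeline; the tag scheme and aggregation behaviour are assumptions, since the card does not document them:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-token predictions into word-level entities (assumption)
ner = pipeline("token-classification", model="Abhinav2499/gpt2-token-class", aggregation_strategy="simple")
print(ner("Barack Obama visited Paris in 2015."))
```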
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2451 | 1.0 | 1796 | 0.2658 | 0.8781 | 0.6962 | 0.7480 | 0.9099 |
| 0.1938 | 2.0 | 3592 | 0.2473 | 0.8683 | 0.7312 | 0.7778 | 0.9153 |
| 0.1452 | 3.0 | 5388 | 0.2614 | 0.8525 | 0.7588 | 0.7953 | 0.9172 |
| 0.1068 | 4.0 | 7184 | 0.3033 | 0.8491 | 0.7584 | 0.7940 | 0.9164 |
| 0.0792 | 5.0 | 8980 | 0.3507 | 0.8612 | 0.7586 | 0.7978 | 0.9190 |
| 0.0597 | 6.0 | 10776 | 0.3924 | 0.8569 | 0.7632 | 0.7999 | 0.9189 |
| 0.0479 | 7.0 | 12572 | 0.4239 | 0.8559 | 0.7666 | 0.8020 | 0.9193 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_JNFconfig2_seed_3_ratio_05_c
|
asenella
| 2023-05-17T23:35:14Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T23:34:59Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/mmnist_JNFconfig2_seed_2_ratio_05_c
|
asenella
| 2023-05-17T23:34:35Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T23:34:21Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
LoganDark/rwkv-4-raven-ggml
|
LoganDark
| 2023-05-17T23:27:57Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T23:27:57Z |
---
license: apache-2.0
---
[Use the master branch.](https://huggingface.co/LoganDark/rwkv-4-raven-ggml/tree/master) HuggingFace won't let me set the default, sorry.
|
ernieg/setfit-beauty-multilabel-example
|
ernieg
| 2023-05-17T23:04:15Z | 3 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-17T23:03:25Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# ernieg/setfit-beauty-multilabel-example
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("ernieg/setfit-beauty-multilabel-example")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Ktang2k/poca-SoccerTwos
|
Ktang2k
| 2023-05-17T22:59:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-05-17T22:59:49Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: Ktang2k/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
asenella/mmnist_JNFconfig2_seed_1_ratio_05_c
|
asenella
| 2023-05-17T22:55:15Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T22:54:32Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
1darkneto8/sdwebui2
|
1darkneto8
| 2023-05-17T22:35:24Z | 0 | 0 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-05-17T21:52:26Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
roneneldan/TinyStories-3M
|
roneneldan
| 2023-05-17T22:11:46Z | 3,446 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"arxiv:2305.07759",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-12T21:46:51Z |
Model trained on the TinyStories Dataset, see https://arxiv.org/abs/2305.07759
------ EXAMPLE USAGE ---

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model = AutoModelForCausalLM.from_pretrained('roneneldan/TinyStories-3M')
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate completion
output = model.generate(input_ids, max_length=1000, num_beams=1)

# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

# Print the generated text
print(output_text)
```
|
benlehrburger/modern-architecture-32
|
benlehrburger
| 2023-05-17T21:50:58Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"dataset:benlehrburger/architecture",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-05-17T21:38:56Z |
---
datasets:
- benlehrburger/architecture
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Low-poly architecture image generation
This model is a diffusion model for unconditional image generation of modern architecture.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('benlehrburger/modern-architecture-32')  # repo id from this model card
image = pipeline().images[0]
image
```
|
asenella/mmnist_JNFconfig2_seed_3_ratio_02_c
|
asenella
| 2023-05-17T21:47:53Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T21:47:39Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|