modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
perrycision/ppo-LunarLander-v2 | perrycision | 2023-06-06T15:46:14Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T15:45:54Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.20 +/- 16.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the actual file in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="perrycision/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ataunal/pc1 | ataunal | 2023-06-06T15:42:31Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T11:08:15Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pc1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 59.80 +/- 42.33
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
thiendio/reinforce-copter-env-v1 | thiendio | 2023-06-06T15:23:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T15:22:58Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-copter-env-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.50 +/- 17.92
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
OctoberTechnology/sdtest | OctoberTechnology | 2023-06-06T15:01:53Z | 48 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-06-06T14:59:44Z | # Chill Watcher
consider deploying on:
- huggingface inference point
- replicate api
- lightning.ai
# Deploy Guide (Chinese)
https://www.bilibili.com/video/BV14V4y167m7
# platform comparison
> all support autoscaling
|platform|prediction speed|cost|ease of deployment|
|-|-|-|-|
|huggingface|fast: 20s|high: $0.6/hr (without autoscaling)|easy: git push|
|replicate|fast if used frequently: 30s; slow if it needs initialization: 5min|low: $0.02 per generation|difficult: build an image and upload it|
|lightning.ai|fast with the app running: 20s; slow if idle: XXs|low: free $30 per month, $0.18 per init, $0.02 per run|easy: one command|
# platform deploy options
## huggingface
> [docs](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
- requirements: list pip packages in `requirements.txt`
- `init()` and `predict()` functions: use `handler.py` and implement the `EndpointHandler` class
- more: modify `handler.py` to customize request handling and inference, and to explore more advanced features
- deploy: git (lfs) push the whole directory (including models, weights, etc.) to the huggingface repository, then use Inference Endpoints to deploy. Click and it deploys automatically; very simple.
- call api: once the endpoint is ready (built, initialized, and in a "running" state), make a POST request to the URL provided by Inference Endpoints, using the request schema defined in `handler.py`. A minimal sketch of a custom handler is shown below.
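A minimal, hypothetical `handler.py` sketch of the `EndpointHandler` pattern described above; the model loading and inference steps are placeholders, not the actual Chill Watcher implementation:
```python
from typing import Any, Dict, List

class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the repository directory on the endpoint;
        # load the model/weights from it here (placeholder).
        self.model = None

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Inference Endpoints pass the parsed request body as `data`
        inputs = data.get("inputs")
        # run inference with self.model here (placeholder) and return
        # a JSON-serializable structure matching your request schema
        return [{"received": inputs}]
```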
## replicate
> [docs](https://replicate.com/docs/guides/push-a-model)
- requirements: specify all requirements (pip packages, system packages, python version, cuda, etc.) in `cog.yaml`
- `init()` and `predict()` functions: use `predict.py` and implement the `Predictor` class
- more: modify `predict.py`
- deploy:
1. get a linux GPU machine with 60GB disk space;
2. install [cog](https://replicate.com/docs/guides/push-a-model) and [docker](https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository)
3. `git pull` the current repository from huggingface, including large model files
4. once `predict.py` and `cog.yaml` are correctly written, run `cog login` and then `cog push`; cog will build a docker image locally and push it to replicate. Since the image can take around 30GB of disk space, pushing it consumes a lot of network bandwidth.
- call api: if everything runs successfully and the docker image is pushed to replicate, you will see a web UI and an API example directly in your replicate repository. A minimal sketch of the `Predictor` class is shown below.
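A minimal, hypothetical `predict.py` sketch of the `Predictor` class that cog expects; the model loading, input field, and output type are placeholders, not the actual Chill Watcher implementation:
```python
from cog import BasePredictor, Input, Path

class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts: load weights here (placeholder)
        self.model = None

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        # Run inference with self.model (placeholder) and write the result
        # to a file, returning its path so Replicate can serve it
        output_path = Path("/tmp/output.txt")
        output_path.write_text(f"echo: {prompt}")
        return output_path
```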
## lightning.ai
> docs: [code](https://lightning.ai/docs/app/stable/levels/basic/real_lightning_component_implementations.html), [deploy](https://lightning.ai/docs/app/stable/workflows/run_app_on_cloud/)
- requirements:
- pip packages are listed in `requirements_lightning.txt`, because some requirements differ from those used for huggingface; rename it to `requirements.txt`
- other pip packages, system packages, and download commands for large model weight files can be listed in a custom build config; check out `class CustomBuildConfig(BuildConfig)` in `app.py`. In a custom build config you can run many linux commands such as `wget` and `sudo apt-get update`. The custom build config is executed during the `__init__()` of the `PythonServer` class
- `init()` and `predict()` function: use `app.py`, implement the `PythonServer` class. Note:
- some packages are not yet installed when the file is first imported (they may only get installed when `__init__()` runs), so put those imports inside the functions rather than at the top of the file, or you may get import errors
- you can't attach your own values to the `PythonServer` instance unless they are predefined attributes, so don't assign self-defined variables to `self`
- if you use the custom build config, you have to implement `PythonServer`'s `__init__()` yourself, so don't forget to use the correct function signature
- more: ...
- deploy:
- `pip install lightning`
- prepare the directory on your local computer (no GPU needed)
- list big files in the `.lightningignore` file to avoid uploading large files and to save deployment time
- run `lightning run app app.py --cloud` in the local terminal; it will upload the files in the directory to the lightning cloud and start deploying there
- check error logs in the web UI, using `all logs`
- call api: only if the app starts successfully will you see a valid URL in the `settings` page of the web UI. Open that URL to see the API
### some useful links:
install docker:
- https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
install git-lfs:
- https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md
linux:
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
```
---
license: apache-2.0
---
|
ying-zh/ppo-LunarLander-v2-torch | ying-zh | 2023-06-06T15:01:04Z | 0 | 0 | deep-rl-course | [
"deep-rl-course",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"ppo",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T14:30:33Z | ---
library_name: deep-rl-course
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- ppo
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 117.88 +/- 53.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [ppo library](https://github.com/yingzha/ppo). |
CeroShrijver/chinese-roberta-wwm-ext-text-classification | CeroShrijver | 2023-06-06T14:49:34Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T14:33:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-roberta-wwm-ext-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-roberta-wwm-ext-text-classification
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7045
- Accuracy: 0.7744
## Model description
Test Accuracy: 0.8254
## Intended uses & limitations
More information needed
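Although the card leaves this section empty, a typical way to try the checkpoint is the standard `transformers` text-classification pipeline (a sketch, not from the original card; the label names depend on the undocumented training data):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; label names depend on the training data
classifier = pipeline(
    "text-classification",
    model="CeroShrijver/chinese-roberta-wwm-ext-text-classification",
)
print(classifier("这部电影非常好看"))  # -> list of {'label': ..., 'score': ...} dicts
```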
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4342 | 1.0 | 1009 | 0.5256 | 0.7835 |
| 0.3493 | 2.0 | 2018 | 0.5649 | 0.7805 |
| 0.1857 | 3.0 | 3027 | 0.7045 | 0.7744 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
elgamous/xx_pipeline | elgamous | 2023-06-06T14:37:02Z | 1 | 0 | spacy | [
"spacy",
"token-classification",
"multilingual",
"model-index",
"region:us"
] | token-classification | 2023-06-06T14:36:39Z | ---
tags:
- spacy
- token-classification
language:
- multilingual
model-index:
- name: xx_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9947643979
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 0.9973753281
---
| Feature | Description |
| --- | --- |
| **Name** | `xx_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.3,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BRAND`, `CITY`, `COUNTRY`, `COUNTY`, `LOC`, `ORG`, `PERSON` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 99.74 |
| `ENTS_P` | 99.48 |
| `ENTS_R` | 100.00 |
| `TOK2VEC_LOSS` | 176.40 |
| `NER_LOSS` | 352.08 | |
gokuls/hBERTv2_new_pretrain_w_init__stsb | gokuls | 2023-06-06T14:24:14Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T14:15:59Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv2_new_pretrain_w_init__stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.3669953973916525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init__stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0270
- Pearson: 0.3743
- Spearmanr: 0.3670
- Combined Score: 0.3707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.2654 | 1.0 | 45 | 2.4836 | 0.2041 | 0.1912 | 0.1976 |
| 1.9657 | 2.0 | 90 | 2.1138 | 0.2744 | 0.2547 | 0.2646 |
| 1.6665 | 3.0 | 135 | 2.2375 | 0.3087 | 0.3002 | 0.3044 |
| 1.3265 | 4.0 | 180 | 2.0270 | 0.3743 | 0.3670 | 0.3707 |
| 1.0731 | 5.0 | 225 | 2.3748 | 0.3294 | 0.3212 | 0.3253 |
| 0.7974 | 6.0 | 270 | 2.6753 | 0.3338 | 0.3353 | 0.3345 |
| 0.6738 | 7.0 | 315 | 2.5125 | 0.3590 | 0.3464 | 0.3527 |
| 0.5384 | 8.0 | 360 | 2.3740 | 0.3310 | 0.3211 | 0.3261 |
| 0.4589 | 9.0 | 405 | 2.3911 | 0.3709 | 0.3690 | 0.3699 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CeroShrijver/xlm-roberta-base-text-classification | CeroShrijver | 2023-06-06T14:24:01Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T14:10:13Z | ---
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-text-classification
This model was trained from scratch on the None dataset.
## Model description
Test Accuracy: 0.8067
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an approximate `TrainingArguments` equivalent):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
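As a rough, hypothetical mapping (not part of the original card), the hyperparameters above correspond approximately to the following `TrainingArguments`; `output_dir` is a placeholder and the Adam settings are the library defaults:
```python
from transformers import TrainingArguments

# Approximate equivalent of the hyperparameters listed above (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-text-classification",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```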
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
crowbarmassage/q-Taxi-v3 | crowbarmassage | 2023-06-06T14:13:29Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-19T19:24:38Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the course notebook (it downloads and unpickles the model dict)
model = load_from_hub(repo_id="crowbarmassage/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
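A fuller, hypothetical sketch (not from the original card): it downloads the pickle directly with `huggingface_hub`, and assumes the saved dict keeps the Q-table under a `"qtable"` key (as in the course notebooks) and that the newer gym/gymnasium step API is available.
```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved model dict (what `load_from_hub` does in the course notebook)
path = hf_hub_download(repo_id="crowbarmassage/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()  # newer gym/gymnasium API: reset returns (obs, info)
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```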
|
gokuls/hBERTv2_new_pretrain_w_init__qqp | gokuls | 2023-06-06T14:10:56Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T07:49:05Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_w_init__qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8042542666336878
- name: F1
type: f1
value: 0.7431353456669914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init__qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4228
- Accuracy: 0.8043
- F1: 0.7431
- Combined Score: 0.7737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6243 | 1.0 | 2843 | 0.5630 | 0.7026 | 0.6300 | 0.6663 |
| 0.5301 | 2.0 | 5686 | 0.5110 | 0.7516 | 0.6346 | 0.6931 |
| 0.4804 | 3.0 | 8529 | 0.4928 | 0.7635 | 0.6780 | 0.7208 |
| 0.4419 | 4.0 | 11372 | 0.4610 | 0.7756 | 0.7173 | 0.7465 |
| 0.4105 | 5.0 | 14215 | 0.4441 | 0.7889 | 0.7347 | 0.7618 |
| 0.3819 | 6.0 | 17058 | 0.4336 | 0.8018 | 0.7207 | 0.7613 |
| 0.3534 | 7.0 | 19901 | 0.4228 | 0.8043 | 0.7431 | 0.7737 |
| 0.33 | 8.0 | 22744 | 0.4429 | 0.8062 | 0.7445 | 0.7754 |
| 0.3098 | 9.0 | 25587 | 0.4296 | 0.8104 | 0.7511 | 0.7807 |
| 0.2912 | 10.0 | 28430 | 0.4386 | 0.8086 | 0.7554 | 0.7820 |
| 0.275 | 11.0 | 31273 | 0.4551 | 0.8143 | 0.7575 | 0.7859 |
| 0.2575 | 12.0 | 34116 | 0.4742 | 0.8160 | 0.7491 | 0.7825 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_w_init_48_qqp | gokuls | 2023-06-06T13:43:59Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T08:20:55Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_w_init_48_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8218896858768241
- name: F1
type: f1
value: 0.7658287535364704
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4082
- Accuracy: 0.8219
- F1: 0.7658
- Combined Score: 0.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5585 | 1.0 | 2843 | 0.5073 | 0.7522 | 0.6429 | 0.6976 |
| 0.4735 | 2.0 | 5686 | 0.4584 | 0.7848 | 0.6963 | 0.7405 |
| 0.4044 | 3.0 | 8529 | 0.4140 | 0.8074 | 0.7234 | 0.7654 |
| 0.3583 | 4.0 | 11372 | 0.4206 | 0.8058 | 0.7602 | 0.7830 |
| 0.3271 | 5.0 | 14215 | 0.4082 | 0.8219 | 0.7658 | 0.7939 |
| 0.2987 | 6.0 | 17058 | 0.4203 | 0.8177 | 0.7666 | 0.7921 |
| 0.3287 | 7.0 | 19901 | 0.4641 | 0.8124 | 0.7209 | 0.7667 |
| 0.3594 | 8.0 | 22744 | 0.4493 | 0.8010 | 0.7246 | 0.7628 |
| 0.3729 | 9.0 | 25587 | 0.4443 | 0.8047 | 0.7388 | 0.7718 |
| 0.3314 | 10.0 | 28430 | 0.4196 | 0.8132 | 0.7411 | 0.7771 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_w_init_48_wnli | gokuls | 2023-06-06T13:41:18Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T13:37:05Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6860
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.852 | 1.0 | 5 | 0.6860 | 0.5634 |
| 0.7576 | 2.0 | 10 | 0.6909 | 0.5634 |
| 0.7506 | 3.0 | 15 | 0.7317 | 0.5634 |
| 0.7746 | 4.0 | 20 | 0.7648 | 0.4366 |
| 0.7363 | 5.0 | 25 | 0.6876 | 0.5634 |
| 0.7133 | 6.0 | 30 | 0.7003 | 0.4366 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hangeol/1000 | hangeol | 2023-06-06T13:37:03Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-06T12:41:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - hangeol/1000
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
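A minimal usage sketch (not from the original card), assuming the repo contains a standard `learned_embeds.bin` produced by the diffusers textual inversion script; the placeholder token below is hypothetical, since the card does not document it:
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Load the learned embedding from this repo (assumes a learned_embeds.bin at the repo root)
pipe.load_textual_inversion("hangeol/1000")
# The prompt should contain the concept's placeholder token, which is not documented here
image = pipe("a photo of <concept> on the beach").images[0]
image.save("example.png")
```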
|
gokuls/hBERTv1_new_pretrain_w_init_48_stsb | gokuls | 2023-06-06T13:36:44Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T13:23:41Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_new_pretrain_w_init_48_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.7471924680940966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9800
- Pearson: 0.7515
- Spearmanr: 0.7472
- Combined Score: 0.7493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5456 | 1.0 | 45 | 2.2706 | 0.1246 | 0.1141 | 0.1194 |
| 2.0514 | 2.0 | 90 | 2.0613 | 0.5266 | 0.5198 | 0.5232 |
| 1.3837 | 3.0 | 135 | 1.1984 | 0.6853 | 0.6942 | 0.6897 |
| 1.0297 | 4.0 | 180 | 1.6176 | 0.6869 | 0.6961 | 0.6915 |
| 0.8064 | 5.0 | 225 | 1.1444 | 0.7476 | 0.7445 | 0.7460 |
| 0.604 | 6.0 | 270 | 1.2754 | 0.7422 | 0.7450 | 0.7436 |
| 0.4818 | 7.0 | 315 | 1.1407 | 0.7687 | 0.7673 | 0.7680 |
| 0.3905 | 8.0 | 360 | 1.1860 | 0.7560 | 0.7604 | 0.7582 |
| 0.3476 | 9.0 | 405 | 0.9800 | 0.7515 | 0.7472 | 0.7493 |
| 0.2819 | 10.0 | 450 | 1.0156 | 0.7521 | 0.7507 | 0.7514 |
| 0.2418 | 11.0 | 495 | 1.0174 | 0.7516 | 0.7480 | 0.7498 |
| 0.2068 | 12.0 | 540 | 1.2367 | 0.7530 | 0.7523 | 0.7527 |
| 0.1863 | 13.0 | 585 | 1.0073 | 0.7491 | 0.7468 | 0.7480 |
| 0.1929 | 14.0 | 630 | 1.0470 | 0.7517 | 0.7505 | 0.7511 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CeroShrijver/glm-large-chinese-text-classification | CeroShrijver | 2023-06-06T13:35:00Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"glm",
"text-classification",
"generated_from_trainer",
"custom_code",
"autotrain_compatible",
"region:us"
] | text-classification | 2023-06-06T05:12:28Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: glm-large-chinese-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glm-large-chinese-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6618
- Accuracy: 0.7705
Still has test bugs!
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5565 | 1.0 | 1009 | 0.4575 | 0.8052 |
| 0.4498 | 2.0 | 2018 | 0.5336 | 0.7800 |
| 0.1593 | 3.0 | 3027 | 0.6618 | 0.7705 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nolanaatama/prfctwrld | nolanaatama | 2023-06-06T13:34:54Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-06T11:37:37Z | ---
license: creativeml-openrail-m
---
|
CalmScout/sd-class-butterflies-64 | CalmScout | 2023-06-06T13:31:07Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-06-06T13:30:36Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('CalmScout/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
wootwoot/anything-v4.0-vae | wootwoot | 2023-06-06T13:23:27Z | 10 | 1 | diffusers | [
"diffusers",
"safetensors",
"en",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T13:18:05Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
---
### From [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0)
All credits go to the original author and all the authors of AnythingV4's ancestor models
### Diffusers
AnythingV4's VAE, compatible with the [🧨Diffusers library](https://github.com/huggingface/diffusers). |
gokuls/hBERTv1_new_pretrain_w_init_48_rte | gokuls | 2023-06-06T13:23:24Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T13:15:08Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_rte
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6910
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.712 | 1.0 | 20 | 0.7158 | 0.4729 |
| 0.7094 | 2.0 | 40 | 0.6958 | 0.4729 |
| 0.7025 | 3.0 | 60 | 0.7008 | 0.4729 |
| 0.705 | 4.0 | 80 | 0.6919 | 0.5271 |
| 0.7023 | 5.0 | 100 | 0.6960 | 0.5271 |
| 0.7002 | 6.0 | 120 | 0.7095 | 0.4729 |
| 0.7071 | 7.0 | 140 | 0.7040 | 0.4729 |
| 0.6982 | 8.0 | 160 | 0.6918 | 0.5271 |
| 0.7025 | 9.0 | 180 | 0.6910 | 0.5271 |
| 0.6965 | 10.0 | 200 | 0.6984 | 0.4621 |
| 0.6814 | 11.0 | 220 | 0.7635 | 0.4946 |
| 0.6616 | 12.0 | 240 | 0.6918 | 0.5271 |
| 0.6658 | 13.0 | 260 | 0.7622 | 0.5307 |
| 0.6316 | 14.0 | 280 | 0.8002 | 0.5090 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Venkatesh4342/xlm-roberta-base-NER | Venkatesh4342 | 2023-06-06T13:21:26Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-06T11:28:05Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-NER
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- F1: 0.8130
## Model description
More information needed
## Intended uses & limitations
More information needed
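Although this section is empty in the card, a typical (hypothetical) way to run the fine-tuned NER checkpoint is the `transformers` token-classification pipeline; the entity label set depends on the unknown training dataset:
```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint; entity labels depend on the (undocumented) training data
ner = pipeline(
    "token-classification",
    model="Venkatesh4342/xlm-roberta-base-NER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```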
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 2263 | 0.1465 | 0.7947 |
| No log | 2.0 | 4527 | 0.1393 | 0.8064 |
| 0.1402 | 3.0 | 6791 | 0.1408 | 0.8083 |
| 0.1402 | 4.0 | 9052 | 0.1431 | 0.8130 |
### Framework versions
- Transformers 4.27.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_w_init__stsb | gokuls | 2023-06-06T13:18:15Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T16:46:28Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: hBERTv1_new_pretrain_w_init__stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
config: stsb
split: validation
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.08916919703003628
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init__stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2584
- Pearson: 0.0949
- Spearmanr: 0.0892
- Combined Score: 0.0920
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.5056 | 1.0 | 45 | 2.2584 | 0.0949 | 0.0892 | 0.0920 |
| 2.1254 | 2.0 | 90 | 2.6871 | 0.1250 | 0.1231 | 0.1241 |
| 1.9839 | 3.0 | 135 | 2.2709 | 0.1790 | 0.1840 | 0.1815 |
| 1.6299 | 4.0 | 180 | 2.5115 | 0.2691 | 0.2797 | 0.2744 |
| 1.3155 | 5.0 | 225 | 2.4555 | 0.3453 | 0.3437 | 0.3445 |
| 0.9686 | 6.0 | 270 | 2.8004 | 0.4571 | 0.4406 | 0.4489 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_w_init_48_qqp | gokuls | 2023-06-06T13:14:47Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T08:45:53Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_w_init_48_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8430373485035865
- name: F1
type: f1
value: 0.7845307619176966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3476
- Accuracy: 0.8430
- F1: 0.7845
- Combined Score: 0.8138
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.4637 | 1.0 | 2843 | 0.3907 | 0.8136 | 0.7636 | 0.7886 |
| 0.363 | 2.0 | 5686 | 0.3536 | 0.8338 | 0.7900 | 0.8119 |
| 0.3211 | 3.0 | 8529 | 0.3476 | 0.8430 | 0.7845 | 0.8138 |
| 0.2906 | 4.0 | 11372 | 0.3539 | 0.8531 | 0.8059 | 0.8295 |
| 0.2603 | 5.0 | 14215 | 0.3531 | 0.8531 | 0.8017 | 0.8274 |
| 0.2373 | 6.0 | 17058 | 0.3716 | 0.8561 | 0.8089 | 0.8325 |
| 0.2175 | 7.0 | 19901 | 0.3553 | 0.8565 | 0.8123 | 0.8344 |
| 0.1957 | 8.0 | 22744 | 0.3726 | 0.8551 | 0.8099 | 0.8325 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tmpusr/Reinforce-CartPole-v1 | tmpusr | 2023-06-06T13:13:07Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T13:12:59Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
casque/DakiV4-10 | casque | 2023-06-06T13:11:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T13:10:00Z | ---
license: creativeml-openrail-m
---
|
birdfoot/ppo-LunarLander-v2 | birdfoot | 2023-06-06T13:10:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T13:10:15Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.97 +/- 21.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the actual file in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="birdfoot/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
opxhere/AlixaKoreanV3 | opxhere | 2023-06-06T13:09:51Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T13:04:02Z | ---
license: creativeml-openrail-m
---
|
gokuls/hBERTv1_new_pretrain_w_init__rte | gokuls | 2023-06-06T13:09:49Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T16:41:06Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init__rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init__rte
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6916
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7478 | 1.0 | 20 | 0.6921 | 0.5271 |
| 0.7195 | 2.0 | 40 | 0.6916 | 0.5271 |
| 0.7087 | 3.0 | 60 | 0.6945 | 0.5271 |
| 0.7025 | 4.0 | 80 | 0.6917 | 0.5379 |
| 0.721 | 5.0 | 100 | 0.6924 | 0.5379 |
| 0.6992 | 6.0 | 120 | 0.7302 | 0.4621 |
| 0.685 | 7.0 | 140 | 0.7124 | 0.5379 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CalmScout/sd-class-butterflies-32 | CalmScout | 2023-06-06T13:07:29Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-06-06T13:06:58Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('CalmScout/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Flynews/ppo-LunarLander | Flynews | 2023-06-06T13:03:47Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T11:41:30Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -49.15 +/- 119.77
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Flynews/ppo-LunarLander'
'batch_size': 512
'minibatch_size': 128}
```
|
optimum/roberta-base-squad2-neuronx | optimum | 2023-06-06T13:03:22Z | 3 | 0 | transformers | [
"transformers",
"roberta",
"question-answering",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-06T12:57:18Z | ---
license: cc-by-4.0
---
This repo contains the artifacts from [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2/tree/main), but in NeuronX format, compatible with INF2 and TRN1 devices. A hypothetical loading sketch follows.
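A loose sketch only, not from the card: it assumes `optimum-neuron` provides a `NeuronModelForQuestionAnswering` class that can consume these pre-compiled artifacts on an INF2/TRN1 instance, and that the tokenizer should come from the source repo; both are assumptions, not confirmed here.
```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForQuestionAnswering  # assumption: class is available in optimum-neuron

# Assumption: the repo holds pre-compiled NeuronX artifacts loadable on inf2/trn1 instances
model = NeuronModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2-neuronx")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

inputs = tokenizer(
    "Who discovered penicillin?",
    "Alexander Fleming discovered penicillin in 1928.",
    return_tensors="pt",
)
outputs = model(**inputs)  # start/end logits for extractive question answering
```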
|
gokuls/hBERTv1_new_pretrain_w_init__qqp | gokuls | 2023-06-06T13:03:13Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T11:48:10Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_w_init__qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8135295572594607
- name: F1
type: f1
value: 0.7339332980412917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init__qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3996
- Accuracy: 0.8135
- F1: 0.7339
- Combined Score: 0.7737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5075 | 1.0 | 2843 | 0.4451 | 0.7864 | 0.7172 | 0.7518 |
| 0.4118 | 2.0 | 5686 | 0.4144 | 0.8052 | 0.7377 | 0.7715 |
| 0.3583 | 3.0 | 8529 | 0.3996 | 0.8135 | 0.7339 | 0.7737 |
| 0.3174 | 4.0 | 11372 | 0.4160 | 0.8195 | 0.7566 | 0.7880 |
| 0.2918 | 5.0 | 14215 | 0.4424 | 0.8142 | 0.7633 | 0.7888 |
| 0.2769 | 6.0 | 17058 | 0.4765 | 0.8195 | 0.7583 | 0.7889 |
| 0.2576 | 7.0 | 19901 | 0.4033 | 0.8237 | 0.7675 | 0.7956 |
| 0.2327 | 8.0 | 22744 | 0.4414 | 0.8279 | 0.7682 | 0.7981 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
optimum/legal-bert-base-uncased-neuron | optimum | 2023-06-06T12:48:52Z | 1 | 0 | transformers | [
"transformers",
"bert",
"pretraining",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-06T12:41:47Z | ---
license: cc-by-sa-4.0
---
This repo contains artifacts from `nlpaueb/legal-bert-base-uncased` in Neuron format compatible with Inferentia 1.
|
TheBloke/Selfee-13B-fp16 | TheBloke | 2023-06-06T12:41:42Z | 14 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-06T09:59:51Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B fp16
This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 13B
<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>
# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.
## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly covers five components:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).
This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌
**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**
## Data Release
For data collection, we collected datasets from five different fields: the Stanford Alpaca dataset, a math collection, a code collection, the Flan collection, and ShareGPT. We provide the code that we used to build the training dataset, and we also provide the code we used to preprocess ShareGPT. For ShareGPT, we only use the first (question, answer) pair from the human and GPT, respectively. We only use instances that are classified as English, and we filter out instances that are not in the form of a question.
For the other datasets, no special data collection method is needed.
## Data Generation Process
To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision or until we reached the maximum number of iterations. However, due to the token limit of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br>
You can see the details with command [here](data_augmentation/README.md).<br>
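The loop below is a minimal sketch of this augmentation process, assuming the pre-1.0 `openai` Python package and illustrative prompt strings; the exact prompts, stop condition, and iteration limit live in the linked `data_augmentation` code.<br>
```python
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY set in the environment

def chat(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

def augment(instruction, max_iterations=3):
    answer = chat(instruction)                      # step 1: initial answer
    chain = []
    for _ in range(max_iterations):
        feedback = chat(                            # step 2: ask for feedback
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            "Give feedback on the answer and state whether a revision is needed."
        )
        chain.append({"answer": answer, "feedback": feedback})
        if "Revision is not needed" in feedback:    # stop once no revision is required
            break
        answer = chat(                              # step 3: revise using the feedback
            f"Instruction: {instruction}\nAnswer: {answer}\nFeedback: {feedback}\n"
            "Rewrite the answer according to the feedback."
        )
    return chain
```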
We provide the whole dataset after collection and augmentation on the Hugging Face Hub ([code](data_collection/download_train.py)), so you can either use that code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use either!
## Training
We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>
To reproduce the training procedure, here are the steps. <br>
```
pip install -r requirements.txt
```
```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full \
```
The hyperparameters are as follows, following Vicuna and Alpaca.
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
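Note that the global batch size in the table is consistent with the command above: 4 GPUs × 16 per-device batch size × 2 gradient-accumulation steps = 128.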
## Inference
<b>Restoring checkpoint using diff</b><br>
We provide diff weights and code that can restore the same model as SelFee. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into Hugging Face format on your local machine. Once that is done, you can restore the checkpoint of our model by using the following command
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```
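Conceptually, applying the delta adds the released diff weights onto the converted base LLaMA weights. The sketch below only illustrates that idea — it assumes the delta is stored in Hugging Face model format with FastChat-style additive diffs and skips details such as tokenizer handling — so use the provided `apply_delta.py` for the real restoration.
```python
import torch
from transformers import AutoModelForCausalLM

# Illustration only: assumes target = base + delta, parameter by parameter.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-hf", torch_dtype=torch.float16)   # placeholder path
delta = AutoModelForCausalLM.from_pretrained("kaist-ai/selfee-7b-delta", torch_dtype=torch.float16)

delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param.data += delta_state[name]  # add the diff weights onto the base weights in place

base.save_pretrained("ckpt/selfee-7b")
```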
<b>Autonomous Inference Mode</b><br>
Because SelFee is trained to generate iterative feedback and revisions until the response is satisfactory, it automatically generates iterative feedback and revisions in a single generation pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with a sequence like `Revision is not needed.`, the model autonomously terminates generation. <br>
For autonomous inference mode,
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```
<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model will continue to revise the answer.
For revision enforce inference mode, use the `max-num-revision` argument.
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
```
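The snippet below is a rough sketch of that enforcement trick on top of a plain `transformers` generation loop (prompt formatting and stopping details are simplified); the released `inference/inference.py` contains the actual implementation.
```python
NOT_NEEDED = "Revision is not needed."
NEEDED = "Revision is needed."

def generate_with_min_revisions(model, tokenizer, prompt, min_revisions=3, max_new_tokens=512):
    text, forced = prompt, 0
    while True:
        input_ids = tokenizer(text, return_tensors="pt").input_ids
        output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
        text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        if text.rstrip().endswith(NOT_NEEDED) and forced < min_revisions:
            # Flip the termination phrase so the model keeps generating revisions.
            text = text.rstrip()[: -len(NOT_NEEDED)] + NEEDED
            forced += 1
            continue
        return text
```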
## Evaluation
Following the evaluation setting of Vicuna, we evaluate on 80 diverse queries and use the GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One difference from the Vicuna evaluation is that, due to the positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is scored twice, with the order of the two responses swapped.<br>
We release the inference results of SelFee in the `evaluation/answer` folder and the scores generated by GPT-4 in the `evaluation/review` folder. <br>
### GPT-4 Automatic Evaluation
First, you need to get your API key to get access to the GPT-4 API.
```
export OPENAI_API_KEYS={personal_key}
```
To compare the performance of a generation result (for example, located at `evaluation/answer/file_A.jsonl`) with another generation result (located at `evaluation/answer/file_B.jsonl`),
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
```
To mitigate the positional bias of the GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with the answers in the opposite position is also needed.
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
```
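To combine the two orderings into a single comparison, the per-model scores from both review files can be averaged; the sketch below assumes a Vicuna-style review format in which each line carries a `score` pair for the first and second answer.
```python
import json

def read_scores(path):
    first, second = [], []
    with open(path) as f:
        for line in f:
            s1, s2 = json.loads(line)["score"]  # assumed field: [score of 1st answer, score of 2nd answer]
            first.append(s1)
            second.append(s2)
    return first, second

a1, b1 = read_scores("evaluation/review/A_vs_B.jsonl")  # file_A answers listed first
b2, a2 = read_scores("evaluation/review/B_vs_A.jsonl")  # file_B answers listed first
print("file_A average:", sum(a1 + a2) / (len(a1) + len(a2)))
print("file_B average:", sum(b1 + b2) / (len(b1) + len(b2)))
```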
## Limitations
Similar to other LLaMA-finetuned models, SelFee also makes mistakes, especially on math, reasoning, factuality, and coding tasks. Although SelFee outperforms ChatGPT in the Vicuna evaluation setting, that setting has limitations in terms of comprehensiveness (only 80 queries), consistency, and reliability. Therefore, further research on better evaluation settings is needed. Please take these claims with a grain of salt.
## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!
#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:
1. Run the controller
```
python3 -m serve.controller
```
2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```
3. Run the web server
```
python3 -m serve.gradio_web_server --share
```
You can find the serving code [here](serve).
### Team members
<a href="https://seonghyeonye.github.io/)">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)
### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee are released on the Hugging Face Hub.
### License
The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.
### Citation
Please cite if you use the data or code in this repo.
```
@misc{selfee2023,
author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
url = {https://kaistai.github.io/SelFee/},
month = {May},
year = {2023},
howpublished = {Blog post}
}
```
|
gokuls/hBERTv2_new_pretrain_48_wnli | gokuls | 2023-06-06T12:40:27Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T12:36:21Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_48_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6839
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9503 | 1.0 | 5 | 0.6839 | 0.5634 |
| 0.7089 | 2.0 | 10 | 0.6877 | 0.5634 |
| 0.7066 | 3.0 | 15 | 0.6858 | 0.5634 |
| 0.7051 | 4.0 | 20 | 0.6943 | 0.4789 |
| 0.6996 | 5.0 | 25 | 0.7125 | 0.4366 |
| 0.7088 | 6.0 | 30 | 0.6890 | 0.5634 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
onedapperterm/LF6_Token_Classifier | onedapperterm | 2023-06-06T12:37:28Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-06T11:33:37Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LF6_Token_Classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LF6_Token_Classifier
This model is a fine-tuned version of [dbmdz/bert-base-german-cased](https://huggingface.co/dbmdz/bert-base-german-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.0468 | 1.0 | 601 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 2.0 | 1202 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 3.0 | 1803 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rohitp1/pratyush_whisper_small_distil_libri360_enc_6_dec_4_batch_4_epoch_2_try2 | rohitp1 | 2023-06-06T12:29:09Z | 78 | 1 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-04T07:01:34Z | ---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: pratyush_whisper_small_distil_libri360_enc_6_dec_4_batch_4_epoch_2_try2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pratyush_whisper_small_distil_libri360_enc_6_dec_4_batch_4_epoch_2_try2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7423
- Wer: 9.7882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 2.8038 | 0.25 | 50 | 2.7934 | 11.8801 |
| 2.8683 | 0.49 | 100 | 2.8112 | 13.4039 |
| 2.8987 | 0.74 | 150 | 2.8008 | 11.8782 |
| 2.884 | 0.98 | 200 | 2.7877 | 11.1632 |
| 2.8539 | 1.23 | 250 | 2.7721 | 10.6549 |
| 2.8348 | 1.48 | 300 | 2.7557 | 10.3250 |
| 2.8261 | 1.72 | 350 | 2.7522 | 10.1403 |
| 2.8161 | 1.97 | 400 | 2.7423 | 9.7882 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
gokuls/hBERTv2_new_pretrain_wnli | gokuls | 2023-06-06T12:22:39Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T17:25:36Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_wnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE WNLI
type: glue
config: wnli
split: validation
args: wnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5633802816901409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_wnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new) on the GLUE WNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6857
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8646 | 1.0 | 5 | 0.7422 | 0.4366 |
| 0.7094 | 2.0 | 10 | 0.7290 | 0.4366 |
| 0.7047 | 3.0 | 15 | 0.7053 | 0.5634 |
| 0.7203 | 4.0 | 20 | 0.7022 | 0.4366 |
| 0.7 | 5.0 | 25 | 0.6977 | 0.4366 |
| 0.7098 | 6.0 | 30 | 0.6885 | 0.5634 |
| 0.695 | 7.0 | 35 | 0.7045 | 0.4366 |
| 0.7053 | 8.0 | 40 | 0.6858 | 0.5634 |
| 0.7095 | 9.0 | 45 | 0.7070 | 0.4366 |
| 0.7012 | 10.0 | 50 | 0.6857 | 0.5634 |
| 0.6995 | 11.0 | 55 | 0.6969 | 0.4507 |
| 0.6913 | 12.0 | 60 | 0.6875 | 0.5634 |
| 0.6963 | 13.0 | 65 | 0.6959 | 0.4789 |
| 0.6996 | 14.0 | 70 | 0.7190 | 0.4366 |
| 0.6957 | 15.0 | 75 | 0.6963 | 0.5634 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alicenkbaytop/distilbert-base-uncased-date | alicenkbaytop | 2023-06-06T12:18:50Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-06-06T12:14:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-date
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-date
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2773
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 0.5215 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 2.0 | 2 | 0.4264 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 3.0 | 3 | 0.3649 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 4.0 | 4 | 0.3289 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 5.0 | 5 | 0.3099 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 6.0 | 6 | 0.2992 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 7.0 | 7 | 0.2920 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 8.0 | 8 | 0.2865 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 9.0 | 9 | 0.2821 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 10.0 | 10 | 0.2790 | 0.0 | 0.0 | 0.0 | 0.9259 |
| No log | 11.0 | 11 | 0.2773 | 0.0 | 0.0 | 0.0 | 0.9259 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_qqp | gokuls | 2023-06-06T12:13:40Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T07:46:10Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_qqp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QQP
type: glue
config: qqp
split: validation
args: qqp
metrics:
- name: Accuracy
type: accuracy
value: 0.8216176106851348
- name: F1
type: f1
value: 0.7561536380849337
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_qqp
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE QQP dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4029
- Accuracy: 0.8216
- F1: 0.7562
- Combined Score: 0.7889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5044 | 1.0 | 2843 | 0.4468 | 0.7865 | 0.6961 | 0.7413 |
| 0.4102 | 2.0 | 5686 | 0.4359 | 0.7992 | 0.6935 | 0.7464 |
| 0.3553 | 3.0 | 8529 | 0.4127 | 0.8080 | 0.7105 | 0.7592 |
| 0.3122 | 4.0 | 11372 | 0.4029 | 0.8216 | 0.7562 | 0.7889 |
| 0.2756 | 5.0 | 14215 | 0.4481 | 0.8228 | 0.7518 | 0.7873 |
| 0.2479 | 6.0 | 17058 | 0.4778 | 0.8268 | 0.7633 | 0.7951 |
| 0.223 | 7.0 | 19901 | 0.4425 | 0.8158 | 0.7721 | 0.7939 |
| 0.2028 | 8.0 | 22744 | 0.4705 | 0.8267 | 0.7686 | 0.7977 |
| 0.183 | 9.0 | 25587 | 0.4908 | 0.8301 | 0.7659 | 0.7980 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thackerhelik/a2c-AntBulletEnv-v0 | thackerhelik | 2023-06-06T12:12:53Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T12:11:46Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1073.77 +/- 101.15
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed; check the repository's file listing if loading fails.
checkpoint = load_from_hub("thackerhelik/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
gokuls/hBERTv1_new_pretrain_rte | gokuls | 2023-06-06T12:05:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T15:33:47Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5306859205776173
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_rte
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6896
- Accuracy: 0.5307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7407 | 1.0 | 20 | 0.7002 | 0.4729 |
| 0.7061 | 2.0 | 40 | 0.7245 | 0.4729 |
| 0.7102 | 3.0 | 60 | 0.6949 | 0.5271 |
| 0.703 | 4.0 | 80 | 0.6951 | 0.4729 |
| 0.7097 | 5.0 | 100 | 0.6974 | 0.4729 |
| 0.7006 | 6.0 | 120 | 0.7053 | 0.4729 |
| 0.6986 | 7.0 | 140 | 0.6896 | 0.5307 |
| 0.6935 | 8.0 | 160 | 0.7711 | 0.4729 |
| 0.6109 | 9.0 | 180 | 0.8443 | 0.4982 |
| 0.469 | 10.0 | 200 | 1.0369 | 0.5126 |
| 0.3028 | 11.0 | 220 | 1.1621 | 0.5235 |
| 0.2155 | 12.0 | 240 | 1.2096 | 0.5379 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mmaguero/gn-bert-large-cased | mmaguero | 2023-06-06T11:59:19Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"doi:10.57967/hf/0359",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T12:16:40Z | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: 'Paraguay ha''e peteĩ táva oĩva [MASK] retãme '
- text: Augusto Roa Bastos ha'e peteĩ [MASK] arandu
metrics:
- accuracy
- f1
---
# BERT-i-large-cased (gnBERT-large-cased)
A pre-trained BERT model for **Guarani** (24 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
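A minimal usage sketch with the `transformers` fill-mask pipeline (the example sentence is the one from the widget above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mmaguero/gn-bert-large-cased")
print(fill_mask("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"))
```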
# How to cite?
```
@article{aguero-et-al2023multi-affect-low-langs-grn,
title={Multidimensional Affective Analysis for Low-resource Languages: A Use Case with Guarani-Spanish Code-switching Language},
author={Agüero-Torales, Marvin Matías, López-Herrera, Antonio Gabriel, and Vilares, David},
journal={Cognitive Computation},
year={2023},
publisher={Springer},
notes={Forthcoming}
}
``` |
mmaguero/gn-bert-tiny-cased | mmaguero | 2023-06-06T11:52:31Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"gn",
"dataset:wikipedia",
"dataset:wiktionary",
"doi:10.57967/hf/0358",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T12:24:57Z | ---
language: gn
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: 'Paraguay ha''e peteĩ táva oĩva [MASK] retãme '
- text: Augusto Roa Bastos ha'e peteĩ [MASK] arandu
metrics:
- f1
- accuracy
---
# BERT-i-tiny-cased (gnBERT-tiny-cased)
A pre-trained BERT model for **Guarani** (2 layers, cased). Trained on Wikipedia + Wiktionary (~800K tokens).
# How to cite?
```
@article{aguero-et-al2023multi-affect-low-langs-grn,
title={Multidimensional Affective Analysis for Low-resource Languages: A Use Case with Guarani-Spanish Code-switching Language},
author={Agüero-Torales, Marvin Matías, López-Herrera, Antonio Gabriel, and Vilares, David},
journal={Cognitive Computation},
year={2023},
publisher={Springer},
notes={Forthcoming}
}
``` |
KHEW/LC2lora | KHEW | 2023-06-06T11:51:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T11:49:36Z | ---
license: creativeml-openrail-m
---
|
Gilung666/Ploypreya | Gilung666 | 2023-06-06T11:50:49Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T11:44:13Z | ---
license: creativeml-openrail-m
---
|
mmaguero/multilingual-bert-gn-base-cased | mmaguero | 2023-06-06T11:50:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"gn",
"multilingual",
"dataset:wikipedia",
"dataset:wiktionary",
"doi:10.57967/hf/0355",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-04T12:47:14Z | ---
language:
- gn
- multilingual
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: 'Paraguay ha''e peteĩ táva oĩva [MASK] retãme '
- text: Augusto Roa Bastos ha'e peteĩ [MASK] arandu
metrics:
- f1
- accuracy
---
# mBERT+gn-base-cased (multilingual-BERT+gn-base-cased)
[BERT multilingual base model (cased, pre-trained BERT model)](https://huggingface.co/bert-base-multilingual-cased) fine-tuned for **Guarani** language modeling (104 languages + gn). Trained on Wikipedia + Wiktionary (~800K tokens).
# How to cite?
```
@article{aguero-et-al2023multi-affect-low-langs-grn,
title={Multidimensional Affective Analysis for Low-resource Languages: A Use Case with Guarani-Spanish Code-switching Language},
author={Agüero-Torales, Marvin Matías, López-Herrera, Antonio Gabriel, and Vilares, David},
journal={Cognitive Computation},
year={2023},
publisher={Springer},
notes={Forthcoming}
}
``` |
stabilityai/sd-vae-ft-mse | stabilityai | 2023-06-06T11:39:15Z | 136,154 | 367 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"license:mit",
"region:us"
] | null | 2022-10-13T12:50:55Z | ---
license: mit
tags:
- stable-diffusion
- stable-diffusion-diffusers
inference: false
---
# Improved Autoencoders
## Utilizing
These weights are intended to be used with the [🧨 diffusers library](https://github.com/huggingface/diffusers). If you are looking for the model to use with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion), [come here](https://huggingface.co/stabilityai/sd-vae-ft-mse-original).
#### How to use with 🧨 diffusers
You can integrate this fine-tuned VAE decoder to your existing `diffusers` workflows, by including a `vae` argument to the `StableDiffusionPipeline`
```py
from diffusers.models import AutoencoderKL
from diffusers import StableDiffusionPipeline
model = "CompVis/stable-diffusion-v1-4"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = StableDiffusionPipeline.from_pretrained(model, vae=vae)
```
## Decoder Finetuning
We publish two kl-f8 autoencoder versions, finetuned from the original [kl-f8 autoencoder](https://github.com/CompVis/latent-diffusion#pretrained-autoencoding-models) on a 1:1 ratio of [LAION-Aesthetics](https://laion.ai/blog/laion-aesthetics/) and LAION-Humans, an unreleased subset containing only SFW images of humans. The intent was to fine-tune on the Stable Diffusion training set (the autoencoder was originally trained on OpenImages) but also enrich the dataset with images of humans to improve the reconstruction of faces.
The first, _ft-EMA_, was resumed from the original checkpoint, trained for 313198 steps and uses EMA weights. It uses the same loss configuration as the original checkpoint (L1 + LPIPS).
The second, _ft-MSE_, was resumed from _ft-EMA_ and uses EMA weights and was trained for another 280k steps using a different loss, with more emphasis
on MSE reconstruction (MSE + 0.1 * LPIPS). It produces somewhat "smoother" outputs. The batch size for both versions was 192 (16 A100s, batch size 12 per GPU).
To keep compatibility with existing models, only the decoder part was finetuned; the checkpoints can be used as a drop-in replacement for the existing autoencoder.
_Original kl-f8 VAE vs f8-ft-EMA vs f8-ft-MSE_
## Evaluation
### COCO 2017 (256x256, val, 5000 images)
| Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments
|----------|---------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| | | | | | | | |
| original | 246803 | 4.99 | 23.4 +/- 3.8 | 0.69 +/- 0.14 | 1.01 +/- 0.28 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD |
| ft-EMA | 560001 | 4.42 | 23.8 +/- 3.9 | 0.69 +/- 0.13 | 0.96 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA |
| ft-MSE | 840001 | 4.70 | 24.5 +/- 3.7 | 0.71 +/- 0.13 | 0.92 +/- 0.27 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. loss = MSE + 0.1 * LPIPS), smoother outputs |
### LAION-Aesthetics 5+ (256x256, subset, 10000 images)
| Model | train steps | rFID | PSNR | SSIM | PSIM | Link | Comments
|----------|-----------|------|--------------|---------------|---------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| | | | | | | | |
| original | 246803 | 2.61 | 26.0 +/- 4.4 | 0.81 +/- 0.12 | 0.75 +/- 0.36 | https://ommer-lab.com/files/latent-diffusion/kl-f8.zip | as used in SD |
| ft-EMA | 560001 | 1.77 | 26.7 +/- 4.8 | 0.82 +/- 0.12 | 0.67 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.ckpt | slightly better overall, with EMA |
| ft-MSE | 840001 | 1.88 | 27.3 +/- 4.7 | 0.83 +/- 0.11 | 0.65 +/- 0.34 | https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt | resumed with EMA from ft-EMA, emphasis on MSE (rec. loss = MSE + 0.1 * LPIPS), smoother outputs |
### Visual
_Visualization of reconstructions on 256x256 images from the COCO2017 validation dataset._
<p align="center">
<br>
<b>
256x256: ft-EMA (left), ft-MSE (middle), original (right)</b>
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00025_merged.png />
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00011_merged.png />
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00037_merged.png />
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00043_merged.png />
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00053_merged.png />
</p>
<p align="center">
<img src=https://huggingface.co/stabilityai/stable-diffusion-decoder-finetune/resolve/main/eval/ae-decoder-tuning-reconstructions/merged/00029_merged.png />
</p>
|
Arjunj/my_awesome_eli5_clm-model | Arjunj | 2023-06-06T11:38:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-06T11:15:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7301
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8763 | 1.0 | 1149 | 3.7520 |
| 3.7809 | 2.0 | 2298 | 3.7339 |
| 3.7307 | 3.0 | 3447 | 3.7301 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alicenkbaytop/model_output | alicenkbaytop | 2023-06-06T11:28:03Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-28T10:01:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: model_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8540
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9062
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 0.9383 | 0.0 | 0.0 | 0.0 | 0.8438 |
| No log | 2.0 | 2 | 0.8540 | 0.0 | 0.0 | 0.0 | 0.9062 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wckdgod/Mridul | wckdgod | 2023-06-06T11:26:35Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2023-06-06T09:39:55Z | ---
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shubham09/bart_lfqa_sqaud | Shubham09 | 2023-06-06T11:23:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-06-06T09:39:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart_lfqa_sqaud
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart_lfqa_sqaud
This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 80 | 3.0473 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zulfi/my_awesome_model | zulfi | 2023-06-06T11:11:23Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T10:21:12Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: zulfi/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zulfi/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2516
- Validation Loss: 0.1906
- Train Accuracy: 0.9248
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2516 | 0.1906 | 0.9248 | 0 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jpandeinge/whisper-base-oshiwambo-speech | jpandeinge | 2023-06-06T11:11:13Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-06-06T05:57:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
- precision
- recall
model-index:
- name: whisper-base-oshiwambo-speech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-oshiwambo-speech
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the [meyabase/crowd-oshiwambo-speech-greetings](https://huggingface.co/datasets/meyabase/crowd-oshiwambo-speech-greetings) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0834
- Wer: 80.9524
- Cer: 58.9623
- Word Acc: 82.2917
- Sent Acc: 54.2857
- Precision: 0.5097
- Recall: 0.7524
- F1 Score: 0.6077
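As a minimal usage sketch (the audio file path is a placeholder), the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jpandeinge/whisper-base-oshiwambo-speech")
print(asr("greeting.wav")["text"])  # replace with a path to a local audio file
```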
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Word Acc | Sent Acc | Precision | Recall | F1 Score |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|:-------:|:--------:|:--------:|:---------:|:------:|:--------:|
| 0.0099 | 117.65 | 1000 | 0.0777 | 46.6667 | 31.6038 | 69.1358 | 11.4286 | 0.6914 | 0.5333 | 0.6022 |
| 0.0105 | 235.29 | 2000 | 0.0806 | 47.6190 | 33.2547 | 71.4286 | 11.4286 | 0.7143 | 0.5238 | 0.6044 |
| 0.0106 | 352.94 | 3000 | 0.0795 | 44.7619 | 34.6698 | 76.3158 | 25.7143 | 0.7632 | 0.5524 | 0.6409 |
| 0.0092 | 470.59 | 4000 | 0.0793 | 42.8571 | 35.8491 | 81.0811 | 31.4286 | 0.8108 | 0.5714 | 0.6704 |
| 0.0099 | 588.24 | 5000 | 0.0806 | 92.3810 | 69.8113 | 81.7073 | 42.8571 | 0.4752 | 0.6381 | 0.5447 |
| 0.0094 | 705.88 | 6000 | 0.0800 | 28.5714 | 22.1698 | 83.3333 | 48.5714 | 0.8333 | 0.7143 | 0.7692 |
| 0.0093 | 823.53 | 7000 | 0.0796 | 24.7619 | 16.2736 | 82.2917 | 54.2857 | 0.8229 | 0.7524 | 0.7861 |
| 0.0095 | 941.18 | 8000 | 0.0815 | 82.8571 | 59.1981 | 80.2083 | 51.4286 | 0.4968 | 0.7333 | 0.5923 |
| 0.01 | 1058.82 | 9000 | 0.0815 | 24.7619 | 16.5094 | 82.2917 | 54.2857 | 0.8229 | 0.7524 | 0.7861 |
| 0.0088 | 1176.47 | 10000 | 0.0834 | 80.9524 | 58.9623 | 82.2917 | 54.2857 | 0.5097 | 0.7524 | 0.6077 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PhysHunter/dqn-SpaceInvadersNoFrameskip-v4 | PhysHunter | 2023-06-06T10:52:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T10:52:05Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 387.00 +/- 119.54
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PhysHunter -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PhysHunter -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PhysHunter
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 30000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0005),
('learning_starts', 30000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Soyoung97/gec_kr | Soyoung97 | 2023-06-06T10:38:07Z | 62 | 2 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-10-25T14:07:39Z | ## Korean Grammatical Error Correction Model
maintainer: [Soyoung Yoon](https://soyoung97.github.io/profile/)
Official repository: [link](https://github.com/soyoung97/GEC-Korean)
Dataset request form: [link](https://forms.gle/kF9pvJbLGvnh8ZnQ6)
Demo: [link](https://huggingface.co/spaces/Soyoung97/gec-korean-demo)
Colab demo: [link](https://colab.research.google.com/drive/1CL__3CpkhBzxWUbvsQmPTQWWu1cWmJHa?usp=sharing)
### Sample code
```
import torch
from transformers import PreTrainedTokenizerFast
from transformers import BartForConditionalGeneration
tokenizer = PreTrainedTokenizerFast.from_pretrained('Soyoung97/gec_kr')
model = BartForConditionalGeneration.from_pretrained('Soyoung97/gec_kr')
text = '한국어는어렵다.'
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]
corrected_ids = model.generate(torch.tensor([input_ids]),
max_length=128,
eos_token_id=1, num_beams=4,
early_stopping=True, repetition_penalty=2.0)
output_text = tokenizer.decode(corrected_ids.squeeze().tolist(), skip_special_tokens=True)
output_text
>>> '한국어는 어렵다.'
```
Special thanks to the [KoBART-summarization repository](https://huggingface.co/gogamza/kobart-summarization) (referenced from it) |
FALCONBoy/whuh | FALCONBoy | 2023-06-06T10:32:52Z | 0 | 0 | fairseq | [
"fairseq",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-06-06T10:30:34Z | ---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- bertscore
library_name: fairseq
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ctojang/distilbert-base-uncased-distilled-clinc | ctojang | 2023-06-06T10:31:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T10:23:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.2
|
rawsh/multi-qa-MiniLM-distill-onnx-L6-cos-v1 | rawsh | 2023-06-06T10:13:26Z | 16 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:search_qa",
"dataset:eli5",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/QQP",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/Amazon-QA",
"dataset:embedding-data/WikiAnswers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-06T05:52:24Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
datasets:
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- search_qa
- eli5
- natural_questions
- trivia_qa
- embedding-data/QQP
- embedding-data/PAQ_pairs
- embedding-data/Amazon-QA
- embedding-data/WikiAnswers
---
# multi-qa-MiniLM-distill-onnx-L6-cos-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (ONNX runtime)
Using optimum
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
from transformers import Pipeline
import torch.nn.functional as F
import torch
# copied from the model card
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
class SentenceEmbeddingPipeline(Pipeline):
def _sanitize_parameters(self, **kwargs):
# we don't have any hyperameters to sanitize
preprocess_kwargs = {}
return preprocess_kwargs, {}, {}
def preprocess(self, inputs):
encoded_inputs = self.tokenizer(inputs, padding=True, truncation=True, return_tensors='pt')
return encoded_inputs
def _forward(self, model_inputs):
outputs = self.model(**model_inputs)
return {"outputs": outputs, "attention_mask": model_inputs["attention_mask"]}
def postprocess(self, model_outputs):
# Perform pooling
sentence_embeddings = mean_pooling(model_outputs["outputs"], model_outputs['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
return sentence_embeddings
# load optimized model
model_name = "rawsh/multi-qa-MiniLM-distill-onnx-L6-cos-v1"
model = ORTModelForFeatureExtraction.from_pretrained(model_name, file_name="model_quantized.onnx")
# create optimized pipeline
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
optimized_emb = SentenceEmbeddingPipeline(model=model, tokenizer=tokenizer)
pred1 = optimized_emb("Hello world!")
pred2 = optimized_emb("I hate everything.")
print(pred1[0].dot(pred2[0]))
```
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## PyTorch Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
model = AutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## TensorFlow Usage (HuggingFace Transformers)
Similarly to the PyTorch example above, to use the model with TensorFlow you pass your input through the transformer model, then you have to apply the correct pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state
input_mask_expanded = tf.cast(tf.tile(tf.expand_dims(attention_mask, -1), [1, 1, token_embeddings.shape[-1]]), tf.float32)
return tf.math.reduce_sum(token_embeddings * input_mask_expanded, 1) / tf.math.maximum(tf.math.reduce_sum(input_mask_expanded, 1), 1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='tf')
# Compute token embeddings
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = tf.math.l2_normalize(embeddings, axis=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
model = TFAutoModel.from_pretrained("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = (query_emb @ tf.transpose(doc_emb))[0].numpy().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
Below are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 384 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent; dot-product is preferred as it is faster. Euclidean distance is monotonically related to dot-product on normalized embeddings and can also be used.
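For illustration, a quick check that dot-product and cosine-similarity agree on the normalized embeddings (using the base model name from the examples above):
```python
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer('sentence-transformers/multi-qa-MiniLM-L6-cos-v1')
emb = model.encode(["How many people live in London?", "Around 9 Million people live in London"], convert_to_tensor=True)
# For unit-length embeddings the two scores coincide (up to floating-point error).
print(util.dot_score(emb[0], emb[1]).item())
print(util.cos_sim(emb[0], emb[1]).item())
```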
----
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective: given a sentence from a pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as input from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
## Intended uses
Our model is intended to be used for semantic search: it encodes queries/questions and text paragraphs in a dense vector space and finds relevant documents for a given query.
Note that there is a limit of 512 word pieces: Text longer than that will be truncated. Further note that the model was just trained on input text up to 250 word pieces. It might not work well for longer text.
## Training procedure
The full training script is accessible in this current repository: `train_script.py`.
### Pre-training
We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
#### Training
We fine-tune our model on the concatenation of multiple datasets; in total we have about 215M (question, answer) pairs.
We sampled each dataset with a weighted probability; the sampling configuration is detailed in the `data_config.json` file.
The model was trained with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) using Mean-pooling, cosine-similarity as similarity function, and a scale of 20.
| Dataset | Number of training tuples |
|--------------------------------------------------------|:--------------------------:|
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs from WikiAnswers | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) Automatically generated (Question, Paragraph) pairs for each paragraph in Wikipedia | 64,371,441 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs from all StackExchanges | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs from all StackExchanges | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) Triplets (query, answer, hard_negative) for 500k queries from Bing search engine | 17,579,773 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) (query, answer) pairs for 3M Google queries and Google featured snippet | 3,012,496 |
| [Amazon-QA](http://jmcauley.ucsd.edu/data/amazon/qa/) (Question, Answer) pairs from Amazon product pages | 2,448,839
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) pairs from Yahoo Answers | 1,198,260 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) pairs from Yahoo Answers | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) pairs from Yahoo Answers | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) (Question, Answer) pairs for 140k questions, each with Top5 Google snippets on that question | 582,261 |
| [ELI5](https://huggingface.co/datasets/eli5) (Question, Answer) pairs from Reddit ELI5 (explainlikeimfive) | 325,475 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions pairs (titles) | 304,525 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) (Question, Duplicate_Question, Hard_Negative) triplets for Quora Questions Pairs dataset | 103,663 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) (Question, Paragraph) pairs for 100k real Google queries with relevant Wikipedia paragraph | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) (Question, Paragraph) pairs from SQuAD2.0 dataset | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) (Question, Evidence) pairs | 73,346 |
| **Total** | **214,988,242** | |
OpenAssistant/falcon-40b-sft-top1-560 | OpenAssistant | 2023-06-06T10:12:42Z | 84 | 50 | transformers | [
"transformers",
"pytorch",
"RefinedWeb",
"text-generation",
"sft",
"custom_code",
"en",
"de",
"es",
"fr",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-02T17:53:28Z | ---
license: apache-2.0
language:
- en
- de
- es
- fr
tags:
- sft
inference: false
datasets:
- OpenAssistant/oasst1
---
# Open-Assistant Falcon 40B SFT OASST-TOP1 Model
This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM.
It was trained with top-1 (high-quality) demonstrations of the OASST data set (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs with LIMA style dropout (p=0.3) and a context-length of 2048 tokens.
## Model Details
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-03_OpenAssistant_falcon-40b-sft-top1-560_sampling_noprefix2.json)
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/)
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/3lr77x4h) (Checkpoint: 560 steps)
- **License:** Apache 2.0
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord)
## Prompting
Two special tokens are used to mark the beginning of user and assistant turns:
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token.
Input prompt example:
```
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>
```
The input ends with the `<|assistant|>` token to signal that the model should
start generating the assistant reply.
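For illustration, a minimal `transformers` sketch that applies the prompt format above (the generation settings are placeholders rather than recommended values, and loading the 40B model in bf16 requires substantial GPU memory):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "OpenAssistant/falcon-40b-sft-top1-560"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0]))
```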
## Configuration Details
Model:
```
falcon-40b:
dtype: bf16
log_dir: "falcon_log_40b"
learning_rate: 5e-6
model_name: "tiiuae/falcon-40b"
deepspeed_config: configs/zero3_config_falcon.json
output_dir: falcon
weight_decay: 0.0
max_length: 2048
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 1
per_device_train_batch_size: 18
per_device_eval_batch_size: 10
eval_steps: 80
save_steps: 80
num_train_epochs: 8
save_total_limit: 4
use_flash_attention: false
residual_dropout: 0.3
residual_dropout_lima: true
sort_by_length: false
save_strategy: steps
```
Dataset:
```
oasst-top1:
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0
input_file_path: 2023-05-06_OASST_labels.jsonl.gz
val_split: 0.05
top_k: 1
``` |
Chen311/AngieLora | Chen311 | 2023-06-06T10:06:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T10:00:07Z | ---
license: creativeml-openrail-m
---
|
ketong3906/autotrain-iris_truncated-64451135750 | ketong3906 | 2023-06-06T09:44:56Z | 3 | 0 | transformers | [
"transformers",
"joblib",
"xgboost",
"autotrain",
"tabular",
"classification",
"tabular-classification",
"dataset:ketong3906/autotrain-data-iris_truncated",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | tabular-classification | 2023-06-06T09:41:46Z | ---
tags:
- autotrain
- tabular
- classification
- tabular-classification
datasets:
- ketong3906/autotrain-data-iris_truncated
co2_eq_emissions:
emissions: 0.9776538031455683
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 64451135750
- CO2 Emissions (in grams): 0.9777
## Validation Metrics
- Loss: 0.091
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load your own data with the same feature columns used in training
data = data[features]
data.columns = ["feat_" + str(col) for col in data.columns]
predictions = model.predict(data) # or model.predict_proba(data)
``` |
wtcherr/sd-unsplash_5k_blur_61KS-model-control-lora | wtcherr | 2023-06-06T09:33:18Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"controlnet",
"control-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-06-06T04:34:19Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- controlnet
- control-lora
inference: true
---
# ControlLoRA text2image fine-tuning - https://huggingface.co/wtcherr/sd-unsplash_5k_blur_61KS-model-control-lora
These are ControlLoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the wtcherr/unsplash_5k_blur_61KS dataset. You can find some example images in the following.



|
ikinglopez1/ppo-LunarLander-v2 | ikinglopez1 | 2023-06-06T09:26:41Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T09:26:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 226.35 +/- 77.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
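For reference, a minimal loading sketch with `huggingface_sb3` (the checkpoint filename follows the usual course naming convention and is an assumption — check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Filename is an assumption; check the repo files if it differs.
checkpoint = load_from_hub(repo_id="ikinglopez1/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint, print_system_info=True)
env = gym.make("LunarLander-v2")  # requires gym[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```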
|
gorilla-llm/gorilla-7b-hf-delta-v0 | gorilla-llm | 2023-06-06T09:14:40Z | 43 | 54 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"arxiv:2305.15334",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-27T22:30:08Z | ---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
# Gorilla: Large Language Model Connected with Massive APIs
By Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez ([Project Website](https://shishirpatil.github.io/gorilla/))
[](https://arxiv.org/abs/2305.15334) [](https://discord.gg/3apqwwME) [](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically and syntactically correct API call to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to train on! Join us as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, open a PR, or email us if you would like to have your API incorporated as well.
## Model Details
Gorilla can be trained either via standard finetuning or using our novel retriever-aware training pipeline. We release `gorilla-7b-hf-delta-v0`, a 0-shot finetuned LLM that can reliably use Hugging Face APIs. It can be prompted simply through natural language (e.g., "I want to generate an image from text."). Check out our website, GitHub, and paper for more information.
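As an illustration only, a rough `transformers` sketch of prompting the model. It assumes the delta weights have already been applied to the base LLaMA-7B checkpoint (the merge script is in the project GitHub), and it uses a plain natural-language query as in the example above rather than any project-specific prompt template:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
merged_path = "path/to/merged-gorilla-7b-hf"  # hypothetical local path to base + delta merged weights
tokenizer = AutoTokenizer.from_pretrained(merged_path)
model = AutoModelForCausalLM.from_pretrained(merged_path, torch_dtype=torch.float16, device_map="auto")
query = "I want to generate an image from text."
inputs = tokenizer(query, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```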
### Model Type
Gorilla is an open-source API caller trained by fine-tuning LLaMA weights. It is an auto-regressive language model, based on the transformer architecture.
### Model Date
05/27/2023
### Organization
Gorilla LLM (UC Berkeley)
|
gorilla-llm/gorilla-7b-th-delta-v0 | gorilla-llm | 2023-06-06T09:13:07Z | 10 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"api",
"en",
"dataset:gorilla-llm/APIBench",
"arxiv:2305.15334",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-28T12:44:06Z | ---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
# Gorilla: Large Language Model Connected with Massive APIs
By Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez ([Project Website](https://shishirpatil.github.io/gorilla/))
[](https://arxiv.org/abs/2305.15334) [](https://discord.gg/3apqwwME) [](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically and syntactically correct API call to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to train on! Join us as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, open a PR, or email us if you would like to have your API incorporated as well.
## Model Details
Gorilla can be trained either via standard finetuning or using our novel retriever-aware training pipeline. We release `gorilla-7b-th-delta-v0`, a 0-shot finetuned LLM that can reliably use Torch Hub APIs. It can be prompted simply through natural language (e.g., "I want to generate an image from text."). Check out our website, GitHub, and paper for more information.
### Model Type
Gorilla is an open-source API caller trained by fine-tuning LLaMA weights. It is an auto-regressive language model, based on the transformer architecture.
### Model Date
05/27/2023
### Organization
Gorilla LLM (UC Berkeley)
|
Lukas-S/Huggy | Lukas-S | 2023-06-06T09:04:37Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-06-06T09:04:30Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Lukas-S/Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Yhyu13/Nous-Hermes-13b-gptq-4bit | Yhyu13 | 2023-06-06T09:03:42Z | 8 | 4 | transformers | [
"transformers",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-06T09:00:59Z | ---
license: apache-2.0
---
GPTQ 4-bit quantization (no act-order, for broader compatibility) that works in textgen-webui.
Generated using scripts from https://gitee.com/yhyu13/llama_-tools
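Outside text-generation-webui, the checkpoint can presumably also be loaded with AutoGPTQ; a rough sketch (the safetensors flag and the Alpaca-style prompt are assumptions — check the repository files and the original Nous-Hermes card):
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
model_name = "Yhyu13/Nous-Hermes-13b-gptq-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name, device="cuda:0", use_safetensors=True)
prompt = "### Instruction:\nWrite a haiku about mountains.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```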
Original weights: https://huggingface.co/NousResearch/Nous-Hermes-13b |
yuvalkirstain/textual_inversion_cat | yuvalkirstain | 2023-06-06T08:57:58Z | 5 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-06T08:25:37Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - yuvalkirstain/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.




|
vnykr/a2c-AntBulletEnv-v0 | vnykr | 2023-06-06T08:49:43Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T08:48:34Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 2164.45 +/- 71.97
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
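For reference, a minimal loading sketch with `huggingface_sb3`. The checkpoint filename is an assumption, and if the training run used `VecNormalize`, its saved statistics would also be needed to reproduce the reported reward:
```python
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0; requires the pybullet package)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Filename is an assumption; check the repository's file list.
checkpoint = load_from_hub(repo_id="vnykr/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
print(action)
```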
|
TheBloke/OpenAssistant-SFT-7-Llama-30B-HF | TheBloke | 2023-06-06T08:39:09Z | 1,570 | 14 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.07327",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-29T09:38:46Z | ---
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OpenAssistant LLaMA 30B SFT 7 HF
This is the HF-format repo of [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).
It is the result of merging the XORs from the above repo with the original Llama 30B weights.
This is epoch 7 of OpenAssistant's training of a Llama 30B model.
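For reference, a minimal `transformers` loading sketch. The `<|prompter|>`/`<|assistant|>` prompt format is borrowed from other OpenAssistant SFT releases and is an assumption here, since this card does not restate it; the fp16 30B weights need on the order of 65 GB of GPU memory or CPU offloading:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_name = "TheBloke/OpenAssistant-SFT-7-Llama-30B-HF"
tokenizer = LlamaTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
prompt = "<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0]))
```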
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card
```
llama-30b-sft-7:
dtype: fp16
log_dir: "llama_log_30b"
learning_rate: 1e-5
model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
#model_name: OpenAssistant/llama-30b-super-pretrain
output_dir: llama_model_30b
deepspeed_config: configs/zero3_config_sft.json
weight_decay: 0.0
residual_dropout: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 12
per_device_train_batch_size: 2
per_device_eval_batch_size: 3
eval_steps: 101
save_steps: 485
num_train_epochs: 4
save_total_limit: 3
use_custom_sampler: true
sort_by_length: false
#save_strategy: steps
save_strategy: epoch
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 1.0
- dolly15k:
val_split: 0.05
max_val_set: 300
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
```
- **OASST dataset paper:** https://arxiv.org/abs/2304.07327
|
gokuls/hBERTv1_new_pretrain_w_init__qnli | gokuls | 2023-06-06T08:30:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T10:28:11Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init__qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.598572213069742
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init__qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6672
- Accuracy: 0.5986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6909 | 1.0 | 819 | 0.6783 | 0.5653 |
| 0.684 | 2.0 | 1638 | 0.6904 | 0.5100 |
| 0.6765 | 3.0 | 2457 | 0.6709 | 0.5881 |
| 0.6696 | 4.0 | 3276 | 0.6774 | 0.5695 |
| 0.6676 | 5.0 | 4095 | 0.6704 | 0.5903 |
| 0.6626 | 6.0 | 4914 | 0.6672 | 0.5986 |
| 0.6661 | 7.0 | 5733 | 0.6703 | 0.5907 |
| 0.6642 | 8.0 | 6552 | 0.6693 | 0.5960 |
| 0.6698 | 9.0 | 7371 | 0.6733 | 0.5799 |
| 0.6724 | 10.0 | 8190 | 0.6815 | 0.5636 |
| 0.68 | 11.0 | 9009 | 0.6908 | 0.5427 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HungDuy/Taxi-v3 | HungDuy | 2023-06-06T08:27:34Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T08:27:32Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="HungDuy/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
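`load_from_hub` above is the helper from the Deep RL course notebook; a minimal re-implementation of what it presumably does (an assumption based on the course material, not code from this repository):
```python
import pickle
import gym
from huggingface_hub import hf_hub_download
def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model stored on the Hugging Face Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
model = load_from_hub(repo_id="HungDuy/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```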
|
Uxinnn/a2c-AntBulletEnv-v0 | Uxinnn | 2023-06-06T08:22:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T08:21:42Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1415.03 +/- 151.44
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kowshikBlue/dummy_1 | kowshikBlue | 2023-06-06T08:03:16Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-06T08:02:58Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gokuls/hBERTv1_new_pretrain_qnli | gokuls | 2023-06-06T07:59:43Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T11:10:23Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6031484532308256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6591
- Accuracy: 0.6031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6783 | 1.0 | 819 | 0.6740 | 0.5861 |
| 0.6609 | 2.0 | 1638 | 0.6591 | 0.6031 |
| 0.6594 | 3.0 | 2457 | 0.6743 | 0.5923 |
| 0.6438 | 4.0 | 3276 | 0.6644 | 0.5876 |
| 0.6421 | 5.0 | 4095 | 0.6731 | 0.5883 |
| 0.6488 | 6.0 | 4914 | 0.6720 | 0.5936 |
| 0.6432 | 7.0 | 5733 | 0.6781 | 0.5923 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_qnli | gokuls | 2023-06-06T07:58:00Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:49:54Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_48_qnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE QNLI
type: glue
config: qnli
split: validation
args: qnli
metrics:
- name: Accuracy
type: accuracy
value: 0.5837451949478308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_qnli
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE QNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6678
- Accuracy: 0.5837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6818 | 1.0 | 819 | 0.6782 | 0.5815 |
| 0.6686 | 2.0 | 1638 | 0.6678 | 0.5837 |
| 0.6472 | 3.0 | 2457 | 0.6738 | 0.5847 |
| 0.6311 | 4.0 | 3276 | 0.6779 | 0.5803 |
| 0.6142 | 5.0 | 4095 | 0.6802 | 0.5850 |
| 0.5969 | 6.0 | 4914 | 0.7076 | 0.5861 |
| 0.5814 | 7.0 | 5733 | 0.7672 | 0.5794 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/mmnist_MMVAEPlusconfig2_seed_0_ratio_05_i | asenella | 2023-06-06T07:55:08Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-25T12:04:33Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Gayathri142214002/t5-end2end-questions-generation_2 | Gayathri142214002 | 2023-06-06T07:49:32Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-05T09:40:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-end2end-questions-generation_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation_2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7103 | 0.13 | 10 | 1.7584 |
| 1.8298 | 0.26 | 20 | 1.3377 |
| 1.4424 | 0.39 | 30 | 1.1610 |
| 1.4063 | 0.52 | 40 | 1.0564 |
| 1.2738 | 0.65 | 50 | 1.0332 |
| 1.2477 | 0.78 | 60 | 0.9531 |
| 1.146 | 0.91 | 70 | 0.9050 |
| 1.0134 | 1.04 | 80 | 0.9388 |
| 0.8782 | 1.17 | 90 | 0.9215 |
| 0.8869 | 1.3 | 100 | 0.8930 |
| 0.8963 | 1.43 | 110 | 0.8996 |
| 0.9138 | 1.56 | 120 | 0.8616 |
| 0.7963 | 1.69 | 130 | 0.8060 |
| 0.8611 | 1.82 | 140 | 0.7611 |
| 1.0504 | 1.95 | 150 | 0.7606 |
| 0.6802 | 2.08 | 160 | 0.7791 |
| 0.7488 | 2.21 | 170 | 0.7470 |
| 0.6659 | 2.34 | 180 | 0.7367 |
| 0.7061 | 2.47 | 190 | 0.7194 |
| 0.6771 | 2.6 | 200 | 0.7006 |
| 0.7267 | 2.73 | 210 | 0.6858 |
| 0.7251 | 2.86 | 220 | 0.6797 |
| 0.7426 | 2.99 | 230 | 0.6877 |
| 0.5425 | 3.12 | 240 | 0.6994 |
| 0.5298 | 3.25 | 250 | 0.7096 |
| 0.697 | 3.38 | 260 | 0.6941 |
| 0.5643 | 3.51 | 270 | 0.6534 |
| 0.6983 | 3.64 | 280 | 0.6407 |
| 0.587 | 3.77 | 290 | 0.6404 |
| 0.6487 | 3.9 | 300 | 0.6489 |
| 0.5862 | 4.03 | 310 | 0.6567 |
| 0.5524 | 4.16 | 320 | 0.6610 |
| 0.5432 | 4.29 | 330 | 0.6609 |
| 0.5165 | 4.42 | 340 | 0.6558 |
| 0.5248 | 4.55 | 350 | 0.6387 |
| 0.5322 | 4.68 | 360 | 0.6319 |
| 0.5272 | 4.81 | 370 | 0.6214 |
| 0.5555 | 4.94 | 380 | 0.6252 |
| 0.597 | 5.06 | 390 | 0.6281 |
| 0.5745 | 5.19 | 400 | 0.6283 |
| 0.5156 | 5.32 | 410 | 0.6265 |
| 0.4898 | 5.45 | 420 | 0.6307 |
| 0.543 | 5.58 | 430 | 0.6280 |
| 0.5094 | 5.71 | 440 | 0.6295 |
| 0.5023 | 5.84 | 450 | 0.6279 |
| 0.4483 | 5.97 | 460 | 0.6228 |
| 0.5134 | 6.1 | 470 | 0.6239 |
| 0.5054 | 6.23 | 480 | 0.6230 |
| 0.4632 | 6.36 | 490 | 0.6205 |
| 0.5016 | 6.49 | 500 | 0.6212 |
| 0.4838 | 6.62 | 510 | 0.6219 |
| 0.4613 | 6.75 | 520 | 0.6225 |
| 0.5062 | 6.88 | 530 | 0.6223 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ocisd4/openllama_tokenizer_ext_zh | ocisd4 | 2023-06-06T07:38:11Z | 0 | 0 | null | [
"region:us"
] | null | 2023-06-02T03:35:29Z | ```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained(
'ocisd4/openllama_tokenizer_ext_zh',
add_bos_token=True,
add_eos_token=False,
use_auth_token='True',
)
print('vocab size:',tokenizer.vocab_size)
#vocab size: 52928
text = '今天天氣真好!'
print(tokenizer.tokenize(text))
#['▁', '今天', '天氣', '真', '好', '<0xEF>', '<0xBC>', '<0x81>']
print(tokenizer.encode(text))
#[1, 31822, 32101, 32927, 45489, 45301, 242, 191, 132]
print(tokenizer.decode(tokenizer.encode(text)))
# 今天天氣真好!</s>
```
**Note:**
- The first token might be a whitespace in `LlamaTokenizer`.
- OpenLLaMA's tokenizer is incompatible with the original LLaMA tokenizer.
- This tokenizer encodes consecutive spaces as a single space (see the snippet below).
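A quick way to check the whitespace behaviour described above (output omitted, since it depends on the released tokenizer files):
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained('ocisd4/openllama_tokenizer_ext_zh', use_auth_token=True)
# If consecutive spaces are collapsed to one, these two encodings should be identical.
print(tokenizer.encode('今天 天氣 真好'))
print(tokenizer.encode('今天   天氣   真好'))
```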
### updated
#### 2023-06-02
- add special tokens: <|pad|>, <|output|>, <|input|>, <|sep|>, <|emb|>, <|rwd|>, <|ctx|> |
ChrissieVR/Hi | ChrissieVR | 2023-06-06T07:29:52Z | 0 | 0 | nemo | [
"nemo",
"dataset:OpenAssistant/oasst1",
"license:openrail",
"region:us"
] | null | 2023-06-06T07:28:41Z | ---
license: openrail
datasets:
- OpenAssistant/oasst1
library_name: nemo
--- |
vind/rl_course_vizdoom_health_gathering_supreme | vind | 2023-06-06T07:25:58Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-06T07:25:40Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.05 +/- 5.94
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r vind/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
cessqq/1111 | cessqq | 2023-06-06T07:21:30Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-05-23T08:58:14Z | ---
metrics:
- bertscore
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BlueAvenir/dummy_1 | BlueAvenir | 2023-06-06T07:13:25Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-06-06T07:13:10Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
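As a quick illustration of the semantic-search use case mentioned above (a minimal sketch; the query and documents are made up), the embeddings can be scored with cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace {MODEL_NAME} with this repository's id
query_emb = model.encode("How do I reset my password?", convert_to_tensor=True)
doc_embs = model.encode(["Instructions for resetting a password",
                         "Quarterly revenue report"], convert_to_tensor=True)
print(util.cos_sim(query_emb, doc_embs))  # the first document should score higher
```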
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
gokuls/hBERTv1_new_pretrain_w_init_48_mrpc | gokuls | 2023-06-06T07:02:57Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:49:57Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_w_init_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6838235294117647
- name: F1
type: f1
value: 0.8122270742358079
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6229
- Accuracy: 0.6838
- F1: 0.8122
- Combined Score: 0.7480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
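
For reference, a hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments` (model and dataset loading are omitted; the `output_dir` name is illustrative):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; pass this to a Trainer together with
# the model, tokenizer and GLUE MRPC dataset.
training_args = TrainingArguments(
    output_dir="hBERTv1_new_pretrain_w_init_48_mrpc",  # illustrative name
    learning_rate=4e-05,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=10,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```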
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6607 | 1.0 | 29 | 0.6262 | 0.6838 | 0.8122 | 0.7480 |
| 0.6421 | 2.0 | 58 | 0.6368 | 0.6838 | 0.8122 | 0.7480 |
| 0.6411 | 3.0 | 87 | 0.6258 | 0.6838 | 0.8122 | 0.7480 |
| 0.6406 | 4.0 | 116 | 0.6422 | 0.6838 | 0.8122 | 0.7480 |
| 0.6364 | 5.0 | 145 | 0.6263 | 0.6838 | 0.8122 | 0.7480 |
| 0.6322 | 6.0 | 174 | 0.6253 | 0.6838 | 0.8122 | 0.7480 |
| 0.6398 | 7.0 | 203 | 0.6289 | 0.6838 | 0.8122 | 0.7480 |
| 0.6363 | 8.0 | 232 | 0.6267 | 0.6838 | 0.8122 | 0.7480 |
| 0.6374 | 9.0 | 261 | 0.6375 | 0.6838 | 0.8122 | 0.7480 |
| 0.6374 | 10.0 | 290 | 0.6248 | 0.6838 | 0.8122 | 0.7480 |
| 0.638 | 11.0 | 319 | 0.6262 | 0.6838 | 0.8122 | 0.7480 |
| 0.6353 | 12.0 | 348 | 0.6236 | 0.6838 | 0.8122 | 0.7480 |
| 0.6338 | 13.0 | 377 | 0.6263 | 0.6838 | 0.8122 | 0.7480 |
| 0.637 | 14.0 | 406 | 0.6250 | 0.6838 | 0.8122 | 0.7480 |
| 0.6375 | 15.0 | 435 | 0.6229 | 0.6838 | 0.8122 | 0.7480 |
| 0.7037 | 16.0 | 464 | 0.6438 | 0.6838 | 0.8122 | 0.7480 |
| 0.6198 | 17.0 | 493 | 0.6242 | 0.6961 | 0.8038 | 0.7499 |
| 0.5847 | 18.0 | 522 | 0.6260 | 0.6740 | 0.7742 | 0.7241 |
| 0.4983 | 19.0 | 551 | 0.7174 | 0.7034 | 0.8158 | 0.7596 |
| 0.4245 | 20.0 | 580 | 0.7737 | 0.6789 | 0.7828 | 0.7308 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CreatorPhan/ViSummary | CreatorPhan | 2023-06-06T07:01:46Z | 134 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"vi",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2023-06-05T21:02:45Z | ---
language:
- vi
library_name: transformers
pipeline_tag: summarization
---
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
device = 'cpu'
model_path = "CreatorPhan/ViSummary"
model = T5ForConditionalGeneration.from_pretrained(model_path).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_path)
context = """
Một yếu tố quan trọng khiến thương vụ Messi trở lại Barca có cơ hội lớn thành công là việc La Liga đã phê chuẩn kế hoạch cân bằng tài chính do Barca trình bày trong buổi họp gần đây. Điều này giúp đội bóng xứ Catalonia giải quyết vấn đề khúc mắc lớn nhất. Vào mùa hè năm 2021, Messi phải rời Barca sau 21 năm gắn bó do CLB không thể đáp ứng quy định tài chính của La Liga.
Messi trở thành cầu thủ tự do sau khi hết hai năm hợp đồng với PSG. Anh được nhiều CLB mời chào. Theo Athletic, có ba đội đang nhắm tới anh là Barca, Inter Miami (Mỹ) và một CLB Arab Saudi. Trong đó, chỉ có phía Saudi đưa ra đề nghị chính thức cho Messi, với hợp đồng trị giá 400 triệu USD mỗi năm.
Tuy nhiên, ở tuổi 35, Messi vẫn muốn trở lại Barca để cống hiến cho CLB đã làm nên tên tuổi của anh. Lúc này, đội chủ sân Nou Camp được dẫn dắt bởi HLV Xavi - đồng đội và là đàn anh chỉ dạy Messi trong những năm đầu sự nghiệp.
"""
tokens = tokenizer(f"Tóm tắt văn bản sau: {context}", return_tensors='pt').input_ids
output = model.generate(tokens.to(device), max_new_tokens=170)[0]
predict = tokenizer.decode(output, skip_special_tokens=True)
print(len(predict.split()))
print(predict)
``` |
Duskfallcrew/the-crystal-exarch-15 | Duskfallcrew | 2023-06-06T06:56:22Z | 52 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-03-04T11:33:42Z | ---
license: creativeml-openrail-m
base_model: andite/anything-v4.0
instance_prompt: FantasyMiq
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - the-crystal-exarch-15
These are LoRA adaption weights for [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). The weights were trained on the instance prompt "FantasyMiq" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
The safetensors file was trained via LASTBEN's fast DreamBooth and does not require "FantasyMiq", but it does require the word "Graha".
Outputs are in a folder; some example images will be added here soon.
Model updates here: https://civitai.com/models/15890/graha-tia-ffxiv
Safetensors version was trained on Anything 3.0
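
A minimal loading sketch (assumes a `diffusers` release that supports `load_attn_procs`; the prompt and file name are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in this card, then apply the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained("andite/anything-v4.0", torch_dtype=torch.float16).to("cuda")
pipe.unet.load_attn_procs("Duskfallcrew/the-crystal-exarch-15")
image = pipe("FantasyMiq, Graha, portrait", num_inference_steps=30).images[0]
image.save("graha.png")
```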
|
gokuls/hBERTv2_new_pretrain_w_init_48_cola | gokuls | 2023-06-06T06:51:53Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:39:56Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.08208497144404353
- name: Accuracy
type: accuracy
value: 0.6836050152778625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6191
- Matthews Correlation: 0.0821
- Accuracy: 0.6836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6301 | 1.0 | 67 | 0.6293 | 0.0 | 0.6913 |
| 0.6238 | 2.0 | 134 | 0.6254 | 0.0 | 0.6913 |
| 0.6072 | 3.0 | 201 | 0.6271 | 0.0339 | 0.6759 |
| 0.5821 | 4.0 | 268 | 0.6191 | 0.0821 | 0.6836 |
| 0.5262 | 5.0 | 335 | 0.7057 | 0.1151 | 0.6510 |
| 0.4735 | 6.0 | 402 | 0.6756 | 0.1181 | 0.6577 |
| 0.4127 | 7.0 | 469 | 0.8493 | 0.1229 | 0.6711 |
| 0.349 | 8.0 | 536 | 0.8919 | 0.1434 | 0.6232 |
| 0.311 | 9.0 | 603 | 0.9018 | 0.1398 | 0.6769 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_w_init_48_cola | gokuls | 2023-06-06T06:49:38Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:36:57Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init_48_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init_48_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init_48) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6185
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6224 | 1.0 | 67 | 0.6200 | 0.0 | 0.6913 |
| 0.6183 | 2.0 | 134 | 0.6233 | 0.0 | 0.6913 |
| 0.6148 | 3.0 | 201 | 0.6241 | 0.0 | 0.6913 |
| 0.6146 | 4.0 | 268 | 0.6185 | 0.0 | 0.6913 |
| 0.6097 | 5.0 | 335 | 0.6187 | 0.0 | 0.6913 |
| 0.6094 | 6.0 | 402 | 0.6209 | 0.0 | 0.6913 |
| 0.6102 | 7.0 | 469 | 0.6328 | 0.0 | 0.6913 |
| 0.5814 | 8.0 | 536 | 0.6735 | 0.0 | 0.6913 |
| 0.5799 | 9.0 | 603 | 0.6648 | -0.0022 | 0.6788 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_48_mrpc | gokuls | 2023-06-06T06:49:21Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:41:47Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv1_new_pretrain_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7058823529411765
- name: F1
type: f1
value: 0.8058252427184466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_48_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5714
- Accuracy: 0.7059
- F1: 0.8058
- Combined Score: 0.7559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6764 | 1.0 | 29 | 0.5974 | 0.6887 | 0.8096 | 0.7492 |
| 0.6341 | 2.0 | 58 | 0.6032 | 0.6838 | 0.7962 | 0.7400 |
| 0.5778 | 3.0 | 87 | 0.5714 | 0.7059 | 0.8058 | 0.7559 |
| 0.4891 | 4.0 | 116 | 0.6448 | 0.7132 | 0.8104 | 0.7618 |
| 0.3469 | 5.0 | 145 | 0.8814 | 0.6593 | 0.7504 | 0.7049 |
| 0.2429 | 6.0 | 174 | 0.8431 | 0.6740 | 0.7654 | 0.7197 |
| 0.1749 | 7.0 | 203 | 1.0049 | 0.7010 | 0.7918 | 0.7464 |
| 0.1434 | 8.0 | 232 | 1.1036 | 0.6765 | 0.7634 | 0.7200 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_48_mrpc | gokuls | 2023-06-06T06:47:35Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:40:33Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_48_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.6936274509803921
- name: F1
type: f1
value: 0.8091603053435115
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_48_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_48) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5996
- Accuracy: 0.6936
- F1: 0.8092
- Combined Score: 0.7514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6634 | 1.0 | 29 | 0.6017 | 0.6863 | 0.7881 | 0.7372 |
| 0.6054 | 2.0 | 58 | 0.6601 | 0.6691 | 0.7316 | 0.7004 |
| 0.5623 | 3.0 | 87 | 0.5996 | 0.6936 | 0.8092 | 0.7514 |
| 0.4773 | 4.0 | 116 | 0.6380 | 0.7010 | 0.8057 | 0.7534 |
| 0.3781 | 5.0 | 145 | 0.8476 | 0.6471 | 0.7391 | 0.6931 |
| 0.258 | 6.0 | 174 | 0.8257 | 0.6642 | 0.7514 | 0.7078 |
| 0.2236 | 7.0 | 203 | 1.1873 | 0.6495 | 0.7451 | 0.6973 |
| 0.1818 | 8.0 | 232 | 1.2389 | 0.6029 | 0.6908 | 0.6469 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
johnyyhk/bert-finetuned-ner | johnyyhk | 2023-06-06T06:43:06Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-27T08:33:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.7087087087087087
- name: Recall
type: recall
value: 0.7866666666666666
- name: F1
type: f1
value: 0.74565560821485
- name: Accuracy
type: accuracy
value: 0.9507519905632557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1783
- Precision: 0.7087
- Recall: 0.7867
- F1: 0.7457
- Accuracy: 0.9508
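
As a quick smoke test (a hedged sketch; the example sentence and `aggregation_strategy` are illustrative), the checkpoint can be used with the token-classification pipeline:

```python
from transformers import pipeline

# Loads this fine-tuned checkpoint from the Hub and groups sub-tokens into entities.
ner = pipeline("token-classification",
               model="johnyyhk/bert-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```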
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 88 | 0.3405 | 0.5117 | 0.5833 | 0.5452 | 0.8924 |
| No log | 2.0 | 176 | 0.1943 | 0.6469 | 0.7633 | 0.7003 | 0.9446 |
| No log | 3.0 | 264 | 0.1783 | 0.7087 | 0.7867 | 0.7457 | 0.9508 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
OpenDILabCommunity/BipedalWalker-v3-A2C | OpenDILabCommunity | 2023-06-06T06:42:00Z | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"BipedalWalker-v3",
"en",
"license:apache-2.0",
"region:us"
] | reinforcement-learning | 2023-06-06T06:41:51Z | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- BipedalWalker-v3
benchmark_name: OpenAI/Gym/Box2d
task_name: BipedalWalker-v3
pipeline_tag: reinforcement-learning
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Box2d-BipedalWalker-v3
type: OpenAI/Gym/Box2d-BipedalWalker-v3
metrics:
- type: mean_reward
value: 277.68 +/- 0.19
name: mean_reward
---
# Play **BipedalWalker-v3** with **A2C** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **A2C** implementation for OpenAI/Gym/Box2d **BipedalWalker-v3** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision-intelligence problems, built on reinforcement learning framework implementations in PyTorch or JAX. The library aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, custom training pipelines and applications can be built by reusing the different abstraction levels of the DI-engine framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import A2CAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py"))
# Instantiate the agent
agent = A2CAgent(
env="bipedalwalker", exp_name="BipedalWalker-v3-A2C", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import A2CAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Hugging Face hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/BipedalWalker-v3-A2C")
# Instantiate the agent
agent = A2CAgent(
env="bipedalwalker",
exp_name="BipedalWalker-v3-A2C",
cfg=cfg.exp_config,
policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
# Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import A2CAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = A2CAgent("bipedalwalker", exp_name="BipedalWalker-v3-A2C")
# Train the agent
return_ = agent.train(step=int(5000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Box2d",
task_name="BipedalWalker-v3",
algo_name="A2C",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/a2c.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/bipedalwalker.html",
installation_guide="pip3 install DI-engine[common_env]",
usage_file_by_git_clone="./a2c/bipedalwalker_a2c_deploy.py",
usage_file_by_huggingface_ding="./a2c/bipedalwalker_a2c_download.py",
train_file="./a2c/bipedalwalker_a2c.py",
repo_id="OpenDILabCommunity/BipedalWalker-v3-A2C"
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 10000000000,
'n_evaluator_episode': 8,
'env_id': 'BipedalWalker-v3',
'collector_env_num': 8,
'evaluator_env_num': 8,
'act_scale': True,
'rew_clip': True
},
'policy': {
'model': {
'action_space': 'continuous',
'obs_shape': 24,
'action_shape': 4
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 1,
'batch_size': 64,
'learning_rate': 0.0003,
'betas': [0.9, 0.999],
'eps': 1e-08,
'grad_norm': 0.5,
'value_weight': 0.7,
'entropy_weight': 0.0005,
'adv_norm': True,
'ignore_done': False,
'discount_factor': 0.99
},
'collect': {
'collector': {},
'unroll_len': 1,
'discount_factor': 0.99,
'gae_lambda': 0.95,
'n_sample': 64
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 10000000000,
'n_episode': 8
}
},
'other': {
'replay_buffer': {}
},
'on_policy': True,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'a2c',
'priority': False,
'priority_IS_weight': False,
'action_space': 'continuous',
'cfg_type': 'A2CPolicyDict'
},
'exp_name': 'BipedalWalker-v3-A2C',
'seed': 0,
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/BipedalWalker-v3-A2C)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/a2c.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/BipedalWalker-v3-A2C/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/BipedalWalker-v3-A2C/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 395.32 KB
- **Last Update Date:** 2023-06-06
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** BipedalWalker-v3
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.8
- **PyTorch version:** 1.7.1
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/bipedalwalker.html)
|
gokuls/hBERTv2_new_pretrain_w_init_48_sst2 | gokuls | 2023-06-06T06:39:38Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T05:58:25Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv2_new_pretrain_w_init_48_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8268348623853211
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3998
- Accuracy: 0.8268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3448 | 1.0 | 527 | 0.3998 | 0.8268 |
| 0.2102 | 2.0 | 1054 | 0.4903 | 0.8337 |
| 0.1588 | 3.0 | 1581 | 0.4602 | 0.8337 |
| 0.126 | 4.0 | 2108 | 0.5509 | 0.8429 |
| 0.1044 | 5.0 | 2635 | 0.4929 | 0.8108 |
| 0.0875 | 6.0 | 3162 | 0.5351 | 0.8257 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_w_init__mrpc | gokuls | 2023-06-06T06:39:04Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-06-06T06:32:20Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_w_init__mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7058823529411765
- name: F1
type: f1
value: 0.8192771084337349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init__mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5908
- Accuracy: 0.7059
- F1: 0.8193
- Combined Score: 0.7626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6576 | 1.0 | 29 | 0.5908 | 0.7059 | 0.8193 | 0.7626 |
| 0.6172 | 2.0 | 58 | 0.6228 | 0.6495 | 0.7433 | 0.6964 |
| 0.5641 | 3.0 | 87 | 0.6026 | 0.6936 | 0.7780 | 0.7358 |
| 0.4682 | 4.0 | 116 | 0.6339 | 0.7034 | 0.7973 | 0.7504 |
| 0.3677 | 5.0 | 145 | 0.9408 | 0.6495 | 0.7307 | 0.6901 |
| 0.2183 | 6.0 | 174 | 0.8311 | 0.6544 | 0.7478 | 0.7011 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_mrpc | gokuls | 2023-06-06T06:35:51Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T09:42:51Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: hBERTv2_new_pretrain_mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7034313725490197
- name: F1
type: f1
value: 0.8118195956454122
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_mrpc
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5990
- Accuracy: 0.7034
- F1: 0.8118
- Combined Score: 0.7576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6721 | 1.0 | 29 | 0.6200 | 0.6838 | 0.8122 | 0.7480 |
| 0.6229 | 2.0 | 58 | 0.6098 | 0.6569 | 0.7255 | 0.6912 |
| 0.5689 | 3.0 | 87 | 0.5990 | 0.7034 | 0.8118 | 0.7576 |
| 0.4615 | 4.0 | 116 | 0.6689 | 0.6765 | 0.78 | 0.7282 |
| 0.3475 | 5.0 | 145 | 0.8472 | 0.6054 | 0.6774 | 0.6414 |
| 0.2307 | 6.0 | 174 | 0.9917 | 0.6103 | 0.6913 | 0.6508 |
| 0.166 | 7.0 | 203 | 1.1149 | 0.6544 | 0.7522 | 0.7033 |
| 0.1258 | 8.0 | 232 | 1.3516 | 0.625 | 0.7119 | 0.6684 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_w_init__cola | gokuls | 2023-06-06T06:30:52Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T10:08:33Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_w_init__cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_w_init__cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_wt_init](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_wt_init) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6171
- Matthews Correlation: 0.0
- Accuracy: 0.6913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6355 | 1.0 | 67 | 0.6239 | 0.0 | 0.6913 |
| 0.6177 | 2.0 | 134 | 0.6211 | 0.0 | 0.6913 |
| 0.6142 | 3.0 | 201 | 0.6231 | 0.0 | 0.6913 |
| 0.6145 | 4.0 | 268 | 0.6171 | 0.0 | 0.6913 |
| 0.6102 | 5.0 | 335 | 0.6199 | 0.0 | 0.6913 |
| 0.6126 | 6.0 | 402 | 0.6184 | 0.0 | 0.6913 |
| 0.6127 | 7.0 | 469 | 0.6206 | 0.0 | 0.6913 |
| 0.6107 | 8.0 | 536 | 0.6185 | 0.0 | 0.6913 |
| 0.6086 | 9.0 | 603 | 0.6260 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv2_new_pretrain_cola | gokuls | 2023-06-06T06:27:33Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T09:32:57Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: hBERTv2_new_pretrain_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6173
- Matthews Correlation: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6294 | 1.0 | 67 | 0.6236 | 0.0 |
| 0.6169 | 2.0 | 134 | 0.6312 | 0.0 |
| 0.6115 | 3.0 | 201 | 0.6173 | 0.0 |
| 0.6372 | 4.0 | 268 | 0.6201 | 0.0 |
| 0.6087 | 5.0 | 335 | 0.6217 | 0.0 |
| 0.6086 | 6.0 | 402 | 0.6248 | 0.0 |
| 0.6113 | 7.0 | 469 | 0.6283 | 0.0 |
| 0.6109 | 8.0 | 536 | 0.6200 | 0.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/hBERTv1_new_pretrain_sst2 | gokuls | 2023-06-06T06:27:01Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-31T08:52:07Z | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: hBERTv1_new_pretrain_sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.7878440366972477
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_sst2
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4752
- Accuracy: 0.7878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4258 | 1.0 | 527 | 0.4994 | 0.8062 |
| 0.2652 | 2.0 | 1054 | 0.5633 | 0.8005 |
| 0.2214 | 3.0 | 1581 | 0.4752 | 0.7878 |
| 0.2014 | 4.0 | 2108 | 0.5329 | 0.7890 |
| 0.1813 | 5.0 | 2635 | 0.5410 | 0.7924 |
| 0.1679 | 6.0 | 3162 | 0.5857 | 0.8085 |
| 0.1526 | 7.0 | 3689 | 0.7654 | 0.8039 |
| 0.1405 | 8.0 | 4216 | 0.6715 | 0.7878 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AXX1995/adindaaprillia | AXX1995 | 2023-06-06T06:21:27Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-06T06:19:09Z | ---
license: creativeml-openrail-m
---
|