modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
MichaelKonu/MoneyMike | MichaelKonu | 2023-07-15T19:29:54Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2023-07-15T19:21:09Z | ---
tags:
- fastai
---
# Model card
## Model description
Classifies three types of bears: teddy, black, grizzly
## Intended uses & limitations
For fun
## Training and evaluation data
ddg
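Since the card gives no usage snippet, here is a minimal hedged sketch of how a fastai model hosted on the Hub is typically loaded and queried; `from_pretrained_fastai` comes from `huggingface_hub`, requires `fastai` to be installed, and the image path is a placeholder:
```python
from huggingface_hub import from_pretrained_fastai

# Download the exported learner from the Hub (requires fastai installed).
learner = from_pretrained_fastai("MichaelKonu/MoneyMike")

# Classify a local image; "bear.jpg" is a placeholder path.
pred_class, pred_idx, probs = learner.predict("bear.jpg")
print(pred_class, probs[pred_idx])
```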
|
KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_manual | KingKazma | 2023-07-15T19:27:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T19:27:12Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
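The card only lists the PEFT version, so here is a hedged sketch of how such a LoRA adapter is commonly loaded with `peft`; the base model is read from the adapter config, and the seq2seq model class is an assumption based on the repository name:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "KingKazma/xsum_t5-small_lora_500_10_3000_8_e-1_s6789_v3_manual"

# Read the adapter config to find the base model it was trained on.
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```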
|
chainsurfer/q-Taxi-v3 | chainsurfer | 2023-07-15T19:21:45Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T19:03:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="chainsurfer/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
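The `load_from_hub` helper above is defined in the Hugging Face Deep RL course notebook rather than in a published library. A self-contained hedged equivalent is sketched below; the `"qtable"` key and the Gymnasium-style API are assumptions based on the course convention:
```python
import pickle

import gymnasium as gym  # assumption: Gymnasium-style API, as used by the course notebooks
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved model dictionary.
path = hf_hub_download(repo_id="chainsurfer/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
done = False
while not done:
    # Act greedily from the Q-table; the "qtable" key is the course convention (an assumption here).
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```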
|
IAyoub/finetuning-sentiment-model-base-zero-shot | IAyoub | 2023-07-15T19:21:16Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-15T17:13:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-base-zero-shot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-base-zero-shot
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5560
- Accuracy: 0.8015
- F1: 0.5511
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
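For reference, the hyperparameters listed above correspond roughly to the following `transformers.TrainingArguments` configuration. This is a hedged reconstruction, not taken from the original training script; the output directory name is assumed, and the Adam betas/epsilon match the library defaults:
```python
from transformers import TrainingArguments

# Rough reconstruction of the listed hyperparameters; Trainer defaults cover Adam betas and epsilon.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-base-zero-shot",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```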
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.02 | 10 | 0.8518 | 0.6738 | 0.2684 |
| No log | 0.03 | 20 | 0.7875 | 0.6738 | 0.2684 |
| No log | 0.05 | 30 | 0.7443 | 0.6738 | 0.2684 |
| No log | 0.07 | 40 | 0.7358 | 0.6746 | 0.2706 |
| No log | 0.08 | 50 | 0.7233 | 0.6742 | 0.2695 |
| No log | 0.1 | 60 | 0.6832 | 0.7148 | 0.3657 |
| No log | 0.12 | 70 | 0.6272 | 0.7735 | 0.4807 |
| No log | 0.13 | 80 | 0.5994 | 0.7910 | 0.4960 |
| No log | 0.15 | 90 | 0.5908 | 0.7898 | 0.5113 |
| No log | 0.17 | 100 | 0.5985 | 0.7982 | 0.5031 |
| No log | 0.18 | 110 | 0.5920 | 0.7965 | 0.5006 |
| No log | 0.2 | 120 | 0.5661 | 0.8053 | 0.5186 |
| No log | 0.22 | 130 | 0.5900 | 0.8015 | 0.5092 |
| No log | 0.23 | 140 | 0.5671 | 0.8023 | 0.5189 |
| No log | 0.25 | 150 | 0.6000 | 0.8044 | 0.5114 |
| No log | 0.27 | 160 | 0.5931 | 0.7785 | 0.5122 |
| No log | 0.28 | 170 | 0.5477 | 0.8065 | 0.5220 |
| No log | 0.3 | 180 | 0.5573 | 0.8107 | 0.5206 |
| No log | 0.32 | 190 | 0.5586 | 0.7961 | 0.5206 |
| No log | 0.34 | 200 | 0.5498 | 0.8107 | 0.5247 |
| No log | 0.35 | 210 | 0.5829 | 0.8036 | 0.5082 |
| No log | 0.37 | 220 | 0.5731 | 0.7843 | 0.5124 |
| No log | 0.39 | 230 | 0.5704 | 0.7915 | 0.5179 |
| No log | 0.4 | 240 | 0.5409 | 0.8070 | 0.5217 |
| No log | 0.42 | 250 | 0.5486 | 0.8120 | 0.5237 |
| No log | 0.44 | 260 | 0.5640 | 0.8082 | 0.5179 |
| No log | 0.45 | 270 | 0.5525 | 0.8086 | 0.5182 |
| No log | 0.47 | 280 | 0.5426 | 0.8086 | 0.5260 |
| No log | 0.49 | 290 | 0.5599 | 0.8040 | 0.5090 |
| No log | 0.5 | 300 | 0.5504 | 0.8124 | 0.5244 |
| No log | 0.52 | 310 | 0.5561 | 0.8074 | 0.5149 |
| No log | 0.54 | 320 | 0.5511 | 0.8061 | 0.5198 |
| No log | 0.55 | 330 | 0.5574 | 0.8082 | 0.5194 |
| No log | 0.57 | 340 | 0.5468 | 0.8099 | 0.5228 |
| No log | 0.59 | 350 | 0.5518 | 0.7990 | 0.5262 |
| No log | 0.6 | 360 | 0.5482 | 0.8099 | 0.5301 |
| No log | 0.62 | 370 | 0.5409 | 0.8111 | 0.5364 |
| No log | 0.64 | 380 | 0.5495 | 0.8103 | 0.5378 |
| No log | 0.65 | 390 | 0.5508 | 0.8111 | 0.5362 |
| No log | 0.67 | 400 | 0.5618 | 0.8011 | 0.5275 |
| No log | 0.69 | 410 | 0.5490 | 0.8103 | 0.5306 |
| No log | 0.7 | 420 | 0.5476 | 0.8116 | 0.5238 |
| No log | 0.72 | 430 | 0.5414 | 0.8090 | 0.5306 |
| No log | 0.74 | 440 | 0.5293 | 0.8153 | 0.5293 |
| No log | 0.75 | 450 | 0.5595 | 0.8141 | 0.5339 |
| No log | 0.77 | 460 | 0.5298 | 0.8132 | 0.5384 |
| No log | 0.79 | 470 | 0.5309 | 0.8132 | 0.5359 |
| No log | 0.8 | 480 | 0.5329 | 0.8132 | 0.5238 |
| No log | 0.82 | 490 | 0.5305 | 0.8132 | 0.5314 |
| 0.5831 | 0.84 | 500 | 0.5560 | 0.8015 | 0.5511 |
| 0.5831 | 0.85 | 510 | 0.5207 | 0.8162 | 0.5393 |
| 0.5831 | 0.87 | 520 | 0.5607 | 0.8070 | 0.5481 |
| 0.5831 | 0.89 | 530 | 0.5321 | 0.8120 | 0.5317 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Hedayat-Abrishami/a2c-AntBulletEnv-v0 | Hedayat-Abrishami | 2023-07-15T19:19:12Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T23:40:13Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1723.82 +/- 66.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Minimal loading sketch: the filename below follows the usual huggingface_sb3
# naming convention and is an assumption; check the repository's files.
checkpoint = load_from_hub(repo_id="Hedayat-Abrishami/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
monideep2255/spell_correction_M04_verification | monideep2255 | 2023-07-15T19:01:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T18:10:56Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: spell_correction_M04_verification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spell_correction_M04_verification
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 269 | 0.3070 |
| 1.8826 | 2.0 | 538 | 0.0769 |
| 1.8826 | 3.0 | 807 | 0.0592 |
| 0.0711 | 4.0 | 1076 | 0.0577 |
| 0.0711 | 5.0 | 1345 | 0.0563 |
| 0.04 | 6.0 | 1614 | 0.0562 |
| 0.04 | 7.0 | 1883 | 0.0560 |
| 0.0265 | 8.0 | 2152 | 0.0544 |
| 0.0265 | 9.0 | 2421 | 0.0540 |
| 0.0196 | 10.0 | 2690 | 0.0534 |
| 0.0196 | 11.0 | 2959 | 0.0548 |
| 0.015 | 12.0 | 3228 | 0.0552 |
| 0.015 | 13.0 | 3497 | 0.0578 |
| 0.0123 | 14.0 | 3766 | 0.0591 |
| 0.0116 | 15.0 | 4035 | 0.0578 |
| 0.0116 | 16.0 | 4304 | 0.0580 |
| 0.0091 | 17.0 | 4573 | 0.0592 |
| 0.0091 | 18.0 | 4842 | 0.0596 |
| 0.0088 | 19.0 | 5111 | 0.0605 |
| 0.0088 | 20.0 | 5380 | 0.0569 |
| 0.0074 | 21.0 | 5649 | 0.0598 |
| 0.0074 | 22.0 | 5918 | 0.0587 |
| 0.0078 | 23.0 | 6187 | 0.0589 |
| 0.0078 | 24.0 | 6456 | 0.0586 |
| 0.0068 | 25.0 | 6725 | 0.0588 |
| 0.0068 | 26.0 | 6994 | 0.0591 |
| 0.0076 | 27.0 | 7263 | 0.0590 |
| 0.0072 | 28.0 | 7532 | 0.0587 |
| 0.0072 | 29.0 | 7801 | 0.0587 |
| 0.0059 | 30.0 | 8070 | 0.0588 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
madoe001/ppo-SnowballTarget | madoe001 | 2023-07-15T18:57:02Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-07-15T18:56:59Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: madoe001/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
said10/classification_model_hotel_demo | said10 | 2023-07-15T18:56:41Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-15T18:50:47Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: said10/classification_model_hotel_demo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# said10/classification_model_hotel_demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5752
- Validation Loss: 0.5130
- Train Accuracy: 0.94
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 115, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
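The optimizer dictionary above can be read as a standard Keras Adam optimizer with a polynomial (power 1.0, i.e. linear) learning-rate decay. A hedged reconstruction for reference, not taken from the original training script:
```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 2e-5 to 0 over 115 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=115,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```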
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1384 | 0.9133 | 0.8 | 0 |
| 0.7682 | 0.6438 | 0.88 | 1 |
| 0.5752 | 0.5130 | 0.94 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mitra-mir/setfit_model_Calgary_epochs2 | mitra-mir | 2023-07-15T18:33:07Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-07-15T02:19:18Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mitra-mir/setfit_model_Calgary_epochs2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mitra-mir/setfit_model_Calgary_epochs2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit_model_Calgary_epochs2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 59 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 118,
"warmup_steps": 12,
"weight_decay": 0.01
}
```
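Put together, the DataLoader, loss and fit parameters above correspond roughly to a call like the one sketched below. This is a hedged reconstruction: the base checkpoint is inferred from the architecture section (an assumption), and the training examples are placeholders since the actual data is not part of this card:
```python
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed MPNet base checkpoint
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=12,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```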
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
san94/tiny-random-GPT2LMHeadModel-finetuned-corpus | san94 | 2023-07-15T18:32:11Z | 154 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-04T12:41:52Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: tiny-random-GPT2LMHeadModel-finetuned-corpus
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-random-GPT2LMHeadModel-finetuned-corpus
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4433 | 1.0 | 1063 | 4.2789 |
| 3.7013 | 2.0 | 2126 | 4.2512 |
| 3.0412 | 3.0 | 3189 | 4.4497 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Naruke/ppo-LunarLander-v2 | Naruke | 2023-07-15T18:25:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T18:24:39Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 291.25 +/- 14.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Minimal loading sketch: the filename below follows the usual huggingface_sb3
# naming convention and is an assumption; check the repository's files.
checkpoint = load_from_hub(repo_id="Naruke/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NasimB/aggregate-all-best-so-far | NasimB | 2023-07-15T18:23:51Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T16:26:00Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: aggregate-all-best-so-far
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aggregate-all-best-so-far
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.686 | 0.3 | 500 | 5.6397 |
| 5.3431 | 0.6 | 1000 | 5.2192 |
| 5.0064 | 0.89 | 1500 | 4.9772 |
| 4.7469 | 1.19 | 2000 | 4.8431 |
| 4.5938 | 1.49 | 2500 | 4.7258 |
| 4.4972 | 1.79 | 3000 | 4.6345 |
| 4.3601 | 2.08 | 3500 | 4.5766 |
| 4.2 | 2.38 | 4000 | 4.5205 |
| 4.1717 | 2.68 | 4500 | 4.4612 |
| 4.1257 | 2.98 | 5000 | 4.4102 |
| 3.8873 | 3.28 | 5500 | 4.4068 |
| 3.8774 | 3.57 | 6000 | 4.3738 |
| 3.8522 | 3.87 | 6500 | 4.3392 |
| 3.6911 | 4.17 | 7000 | 4.3476 |
| 3.5905 | 4.47 | 7500 | 4.3367 |
| 3.5827 | 4.76 | 8000 | 4.3230 |
| 3.5304 | 5.06 | 8500 | 4.3246 |
| 3.3915 | 5.36 | 9000 | 4.3290 |
| 3.4003 | 5.66 | 9500 | 4.3258 |
| 3.3934 | 5.96 | 10000 | 4.3253 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nlp-lab-2023-seq2seq/R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77 | nlp-lab-2023-seq2seq | 2023-07-15T18:23:21Z | 30 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T11:11:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- sacrebleu
- bleu
- rouge
model-index:
- name: R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# R-facebook-bart-base-full-ft-with-tum-nlp-german-gpt2_easy-prior-pp-no-ls-4c77
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1506
- Sacrebleu: 7.6134
- Bleu: 0.0761
- Rouge1: 0.3006
- Rouge2: 0.1038
- Rougel: 0.2079
- Sari: 39.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Bleu | Rouge1 | Rouge2 | Rougel | Sari |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:------:|:------:|:-------:|
| 6.9721 | 0.25 | 100 | 4.1739 | 1.8048 | 0.0180 | 0.1980 | 0.0611 | 0.1541 | 37.1235 |
| 3.8977 | 0.5 | 200 | 4.0984 | 1.2756 | 0.0128 | 0.2076 | 0.0678 | 0.1581 | 37.6186 |
| 4.035 | 0.75 | 300 | 4.0622 | 2.6499 | 0.0265 | 0.2271 | 0.0740 | 0.1741 | 38.1373 |
| 8.2055 | 0.99 | 400 | 4.0561 | 2.7363 | 0.0274 | 0.2332 | 0.0804 | 0.1716 | 38.0851 |
| 3.6957 | 1.24 | 500 | 4.0262 | 3.5110 | 0.0351 | 0.2560 | 0.0852 | 0.1852 | 37.9403 |
| 3.0846 | 1.49 | 600 | 4.0121 | 3.2967 | 0.0330 | 0.2471 | 0.0815 | 0.1799 | 37.5590 |
| 3.283 | 1.74 | 700 | 4.0510 | 3.8512 | 0.0385 | 0.2602 | 0.0917 | 0.1951 | 38.0037 |
| 4.7429 | 1.99 | 800 | 4.0048 | 3.4891 | 0.0349 | 0.2524 | 0.0850 | 0.1877 | 38.0324 |
| 3.024 | 2.24 | 900 | 3.9860 | 3.9202 | 0.0392 | 0.2633 | 0.0844 | 0.1891 | 37.9931 |
| 5.6861 | 2.49 | 1000 | 4.0493 | 4.4801 | 0.0448 | 0.2622 | 0.0878 | 0.1926 | 38.2052 |
| 3.6185 | 2.74 | 1100 | 4.0394 | 3.6710 | 0.0367 | 0.2608 | 0.0857 | 0.1866 | 37.9620 |
| 3.3582 | 2.98 | 1200 | 4.0004 | 5.1257 | 0.0513 | 0.2695 | 0.0922 | 0.1956 | 38.4845 |
| 5.0036 | 3.23 | 1300 | 4.0223 | 5.3256 | 0.0533 | 0.2752 | 0.0938 | 0.1975 | 38.6943 |
| 3.9904 | 3.48 | 1400 | 4.0040 | 5.0070 | 0.0501 | 0.2744 | 0.0927 | 0.1951 | 38.5338 |
| 3.1496 | 3.73 | 1500 | 4.0282 | 5.9234 | 0.0592 | 0.2803 | 0.0907 | 0.2002 | 38.2119 |
| 3.9604 | 3.98 | 1600 | 4.0253 | 5.1875 | 0.0519 | 0.2658 | 0.0864 | 0.1920 | 38.2336 |
| 2.9813 | 4.23 | 1700 | 4.0148 | 5.9589 | 0.0596 | 0.2891 | 0.0976 | 0.2028 | 38.8216 |
| 3.5448 | 4.48 | 1800 | 4.0071 | 5.2759 | 0.0528 | 0.2736 | 0.0867 | 0.1894 | 37.8800 |
| 3.6836 | 4.72 | 1900 | 4.0105 | 5.1414 | 0.0514 | 0.2750 | 0.0894 | 0.1982 | 38.3898 |
| 4.0471 | 4.97 | 2000 | 3.9788 | 5.5747 | 0.0557 | 0.2792 | 0.0932 | 0.1973 | 38.5705 |
| 3.3437 | 5.22 | 2100 | 4.0057 | 5.3969 | 0.0540 | 0.2827 | 0.0926 | 0.1978 | 38.3453 |
| 3.1657 | 5.47 | 2200 | 4.0439 | 5.4820 | 0.0548 | 0.2861 | 0.0946 | 0.2071 | 38.4004 |
| 2.5486 | 5.72 | 2300 | 4.0315 | 6.1738 | 0.0617 | 0.2896 | 0.0966 | 0.2048 | 38.5404 |
| 3.6148 | 5.97 | 2400 | 4.0056 | 6.5570 | 0.0656 | 0.2941 | 0.1046 | 0.2072 | 39.0698 |
| 3.1477 | 6.22 | 2500 | 4.0612 | 6.2221 | 0.0622 | 0.2806 | 0.0932 | 0.1998 | 38.5211 |
| 3.175 | 6.47 | 2600 | 4.0126 | 6.6920 | 0.0669 | 0.2916 | 0.1037 | 0.2122 | 39.1438 |
| 4.6616 | 6.71 | 2700 | 4.0467 | 6.0344 | 0.0603 | 0.2804 | 0.0953 | 0.1983 | 38.4171 |
| 3.109 | 6.96 | 2800 | 4.0420 | 5.8656 | 0.0587 | 0.2864 | 0.0983 | 0.2034 | 38.7225 |
| 3.0659 | 7.21 | 2900 | 4.0613 | 5.6029 | 0.0560 | 0.2839 | 0.0938 | 0.1980 | 38.7136 |
| 2.658 | 7.46 | 3000 | 4.0726 | 6.2791 | 0.0628 | 0.2824 | 0.0947 | 0.1972 | 38.6330 |
| 3.178 | 7.71 | 3100 | 4.0437 | 6.4351 | 0.0644 | 0.2924 | 0.0956 | 0.2032 | 38.6577 |
| 4.0606 | 7.96 | 3200 | 4.0644 | 6.6271 | 0.0663 | 0.2966 | 0.1019 | 0.2088 | 39.1513 |
| 3.664 | 8.21 | 3300 | 4.0615 | 6.3354 | 0.0634 | 0.2961 | 0.0981 | 0.2024 | 38.6904 |
| 2.8457 | 8.46 | 3400 | 4.0861 | 7.4278 | 0.0743 | 0.2975 | 0.1025 | 0.2017 | 39.0452 |
| 3.3883 | 8.7 | 3500 | 4.1037 | 6.4498 | 0.0645 | 0.2826 | 0.0955 | 0.2008 | 38.5961 |
| 5.4189 | 8.95 | 3600 | 4.1099 | 6.0065 | 0.0601 | 0.2946 | 0.0952 | 0.2020 | 38.6177 |
| 3.2093 | 9.2 | 3700 | 4.1074 | 6.2514 | 0.0625 | 0.2933 | 0.0942 | 0.2014 | 38.7227 |
| 3.9625 | 9.45 | 3800 | 4.0937 | 6.6653 | 0.0667 | 0.2912 | 0.0970 | 0.2020 | 38.4853 |
| 2.7172 | 9.7 | 3900 | 4.1130 | 6.1736 | 0.0617 | 0.2860 | 0.0898 | 0.1948 | 38.5064 |
| 2.4973 | 9.95 | 4000 | 4.0737 | 7.4889 | 0.0749 | 0.2986 | 0.1023 | 0.2060 | 39.2124 |
| 2.7371 | 10.2 | 4100 | 4.1032 | 6.4897 | 0.0649 | 0.2985 | 0.0990 | 0.2031 | 38.3514 |
| 3.9244 | 10.44 | 4200 | 4.0880 | 6.7268 | 0.0673 | 0.2906 | 0.1006 | 0.2012 | 38.6404 |
| 3.2153 | 10.69 | 4300 | 4.0961 | 6.7780 | 0.0678 | 0.2953 | 0.0977 | 0.2008 | 38.7091 |
| 3.0715 | 10.94 | 4400 | 4.1005 | 7.1435 | 0.0714 | 0.2870 | 0.0937 | 0.1950 | 38.5542 |
| 2.7833 | 11.19 | 4500 | 4.1112 | 7.5856 | 0.0759 | 0.3008 | 0.1037 | 0.2063 | 38.8659 |
| 5.6278 | 11.44 | 4600 | 4.0988 | 7.8870 | 0.0789 | 0.2962 | 0.1019 | 0.2025 | 38.8174 |
| 4.3557 | 11.69 | 4700 | 4.1049 | 7.9121 | 0.0791 | 0.3105 | 0.1076 | 0.2106 | 39.2476 |
| 3.4938 | 11.94 | 4800 | 4.1067 | 7.1602 | 0.0716 | 0.2961 | 0.1009 | 0.2039 | 38.9165 |
| 5.6848 | 12.19 | 4900 | 4.1140 | 7.8746 | 0.0787 | 0.2951 | 0.0996 | 0.2005 | 38.7719 |
| 3.4738 | 12.43 | 5000 | 4.0969 | 7.8672 | 0.0787 | 0.3055 | 0.1087 | 0.2092 | 39.0808 |
| 2.9039 | 12.68 | 5100 | 4.1185 | 7.6696 | 0.0767 | 0.3033 | 0.1071 | 0.2092 | 39.0788 |
| 4.4091 | 12.93 | 5200 | 4.1346 | 7.9896 | 0.0799 | 0.3014 | 0.1046 | 0.2070 | 39.2032 |
| 3.102 | 13.18 | 5300 | 4.1308 | 7.2969 | 0.0730 | 0.3030 | 0.1032 | 0.2039 | 39.1031 |
| 2.9972 | 13.43 | 5400 | 4.1518 | 7.7779 | 0.0778 | 0.3017 | 0.1053 | 0.2090 | 39.4092 |
| 2.7672 | 13.68 | 5500 | 4.1515 | 7.7545 | 0.0775 | 0.3010 | 0.1079 | 0.2091 | 39.0093 |
| 3.7358 | 13.93 | 5600 | 4.1360 | 7.5980 | 0.0760 | 0.2970 | 0.1036 | 0.2080 | 39.0873 |
| 3.4363 | 14.17 | 5700 | 4.1367 | 7.2901 | 0.0729 | 0.3013 | 0.1057 | 0.2084 | 39.3389 |
| 3.3451 | 14.42 | 5800 | 4.1500 | 7.5605 | 0.0756 | 0.2984 | 0.0979 | 0.2074 | 39.0107 |
| 2.8616 | 14.67 | 5900 | 4.1447 | 7.8204 | 0.0782 | 0.3020 | 0.1059 | 0.2127 | 39.7465 |
| 3.1149 | 14.92 | 6000 | 4.1506 | 7.6134 | 0.0761 | 0.3006 | 0.1038 | 0.2079 | 39.5909 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/LLaMa-7B-GGML | TheBloke | 2023-07-15T18:15:35Z | 90 | 71 | transformers | [
"transformers",
"llama",
"license:other",
"region:us"
] | null | 2023-05-17T12:59:21Z | ---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 7b GGML
These files are GGML format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB| 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB| 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB| 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB| 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB| 6.29 GB | Original quant method, 4-bit. |
| llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB| 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB| 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB| 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB| 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB| 7.56 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB| 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB| 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB| 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB| 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
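If you prefer Python over the raw `llama.cpp` binary, the same file can be loaded with `llama-cpp-python`. A hedged sketch only: it assumes a llama-cpp-python release from the GGML era (current versions expect GGUF), and the quantisation file chosen here is just one of the options from the table above:
```python
from llama_cpp import Llama

# Point model_path at the downloaded GGML file; n_gpu_layers is optional GPU offload.
llm = Llama(model_path="./llama-7b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)

output = llm("### Instruction: Write a story about llamas\n### Response:", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```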
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 7b
This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
|
Lounarisnia/lucyna-kushinada | Lounarisnia | 2023-07-15T17:55:41Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-15T17:29:05Z | ---
license: openrail
---
# Naming Guide
modelName-RVCVersion.ModelVersion.zip
# Info
- Character Name: Lucyna Kushinada
- Origin: Cyberpunk: Edgerunners
- Epoch: 500
- RVC: V2 |
sagarsdesai/PPO-LunarLander-v2 | sagarsdesai | 2023-07-15T17:50:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-14T19:11:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.79 +/- 13.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Minimal loading sketch: the filename below follows the usual huggingface_sb3
# naming convention and is an assumption; check the repository's files.
checkpoint = load_from_hub(repo_id="sagarsdesai/PPO-LunarLander-v2", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
FabbriSimo01/Bloom_1b_Quantized | FabbriSimo01 | 2023-07-15T17:44:29Z | 1,552 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | 2023-07-15T17:37:01Z | ---
license: bigscience-bloom-rail-1.0
---
|
TheBloke/LLaMa-30B-GGML | TheBloke | 2023-07-15T17:29:53Z | 17 | 24 | transformers | [
"transformers",
"llama",
"license:other",
"region:us"
] | null | 2023-05-17T12:59:42Z | ---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 30b GGML
These files are GGML format model files for [Meta's LLaMA 30b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-30b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-30b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-30b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB| 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB| 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB| 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB| 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB| 20.80 GB | Original quant method, 4-bit. |
| llama-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB| 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| llama-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB| 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB| 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB| 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB| 26.90 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB| 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB| 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB| 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB| 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-30b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 30b
This contains the weights for the LLaMA-30b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format.
|
phatjk/bloomz-lora-vi-QA-NLLB-viquad_v3 | phatjk | 2023-07-15T17:12:55Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T17:12:48Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TheBloke/LLaMa-13B-GGML | TheBloke | 2023-07-15T17:09:15Z | 27 | 19 | transformers | [
"transformers",
"llama",
"license:other",
"region:us"
] | null | 2023-05-17T12:59:31Z | ---
inference: false
license: other
model_type: llama
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Meta's LLaMA 13b GGML
These files are GGML format model files for [Meta's LLaMA 13b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-13b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-13b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-13b)
## Prompt template: None
```
{prompt}
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB| 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB| 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB| 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| llama-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB| 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| llama-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| llama-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
| llama-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB| 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| llama-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB| 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| llama-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| llama-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB| 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| llama-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB| 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| llama-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| llama-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m llama-13b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
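For programmatic use, here is a minimal llama-cpp-python sketch (the file path and sampling settings are placeholders mirroring the command line above):
```python
from llama_cpp import Llama
# Assumes llama-13b.ggmlv3.q4_0.bin has been downloaded to the current directory.
llm = Llama(model_path="./llama-13b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)
output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```
Set `n_gpu_layers=0` if you are running on CPU only.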
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Meta's LLaMA 13b
This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.
|
steventrouble/EfficientZeroRemastered | steventrouble | 2023-07-15T17:05:31Z | 0 | 1 | null | [
"reinforcement-learning",
"arxiv:2111.00210",
"license:openrail",
"region:us"
] | reinforcement-learning | 2023-07-15T16:48:21Z | ---
license: openrail
pipeline_tag: reinforcement-learning
---
# EfficientZero Remastered
This repo contains the pre-trained models for the EfficientZero Remastered
project from Gigglebit Studios, a project to stabilize the training process
for the state of the art EfficientZero model.
* [Training source code](https://github.com/steventrouble/EfficientZero)
* [About the project](https://www.gigglebit.net/blog/efficientzero.html)
* [About EfficientZero](https://arxiv.org/abs/2111.00210)
* [About Gigglebit](https://www.gigglebit.net/)
Huge thanks to [Stability AI](https://stability.ai/) for providing the compute
for this project!
---
## How to use these files
Download the model that you want to test, then run test.py to test the model.
_Note: We've only productionized the training process. If you want to use these
for inference in production, you'll need to write your own inference logic.
If you do, send us a PR and we'll add it to the repo!_
Files are labeled as follows:
```
{gym_env}-s{seed}-e{env_steps}-t{train_steps}
```
Where:
* `gym_env`: The string ID of the gym environment this model was trained on.
E.g. Breakout-v5
* `seed`: The seed that was used to train this model. Usually 0.
* `env_steps`: The total number of steps in the environment that this model
observed, usually 100k.
* `train_steps`: The total number of training epochs the model underwent.
Note that `env_steps` can differ from `train_steps` because the model can
continue fine-tuning using its replay buffer. In the paper, the last 20k
epochs are done in this manner. This isn't necessary outside of benchmarks
and in theory better performance should be attainable by getting more samples
from the env.
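Given that naming convention, a filename such as `Breakout-v5-s0-e100000-t120000` (a hypothetical example) can be split back into its parts with a small helper like this:
```python
import re
# Parse "{gym_env}-s{seed}-e{env_steps}-t{train_steps}" style names.
PATTERN = re.compile(r"^(?P<gym_env>.+)-s(?P<seed>\d+)-e(?P<env_steps>\d+)-t(?P<train_steps>\d+)$")
def parse_model_name(name: str) -> dict:
    match = PATTERN.match(name)
    if match is None:
        raise ValueError(f"unrecognised model name: {name}")
    parts = match.groupdict()
    return {
        "gym_env": parts["gym_env"],
        "seed": int(parts["seed"]),
        "env_steps": int(parts["env_steps"]),
        "train_steps": int(parts["train_steps"]),
    }
print(parse_model_name("Breakout-v5-s0-e100000-t120000"))
```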
---
## Findings
Our primary goal in this project was to test out EfficientZero and see its capabilities.
We were amazed by the model overall, especially on Breakout, where it far outperformed
the human baseline. The overall cost was only about $50 per fully trained model, compared
to the hundreds of thousands of dollars needed to train MuZero.
Though the trained models achieved impressive scores in Atari, they didn't reach the
stellar scores demonstrated in the paper. This could be because we used different hardware
and dependencies or because ML research papers tend to cherry-pick models and environments
to showcase good results.
Additionally, the models tended to hit a performance wall between 75-100k steps. While we
don't have enough data to know why or how often this happens, it's not surprising: the model
was tuned specifically for data efficiency, so it hasn't been tested at larger scales. A
model like MuZero might be more appropriate if you have a large budget.
Training times seemed longer than those reported in the EfficientZero paper. The paper
stated that they could train a model to completion in 7 hours, while in practice, we've found
that it takes an A100 with 32 cores between 1 to 2 days to train a model to completion. This
is likely because the training process uses more CPU than other models and therefore does not
perform well on the low-frequency, many-core CPUs found in GPU clusters. |
BrainTheos/whisper-tiny-ln-ojpl-2 | BrainTheos | 2023-07-15T16:57:22Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:BrainTheos/ojpl",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-15T15:32:49Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- BrainTheos/ojpl
metrics:
- wer
model-index:
- name: whisper-tiny-ln-ojpl-2
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: BrainTheos/ojpl
type: BrainTheos/ojpl
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.4351648351648352
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ln-ojpl-2
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the BrainTheos/ojpl dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2661
- Wer Ortho: 50.1855
- Wer: 0.4352
## Model description
More information needed
## Intended uses & limitations
More information needed
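A minimal transcription sketch (the audio path is a placeholder; this assumes the checkpoint loads like any standard Whisper model with the 🤗 Transformers pipeline):
```python
from transformers import pipeline
asr = pipeline("automatic-speech-recognition", model="BrainTheos/whisper-tiny-ln-ojpl-2")
# "sample.wav" is a placeholder for a local 16 kHz recording.
print(asr("sample.wav")["text"])
```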
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.1767 | 11.36 | 500 | 0.9122 | 52.1142 | 0.4579 |
| 0.0191 | 22.73 | 1000 | 1.0786 | 53.7463 | 0.4538 |
| 0.0059 | 34.09 | 1500 | 1.1891 | 53.2641 | 0.4766 |
| 0.0019 | 45.45 | 2000 | 1.2661 | 50.1855 | 0.4352 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jeremyvictor/mt5-base-gramatika-final-e8-b16 | jeremyvictor | 2023-07-15T16:50:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T15:56:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-base-gramatika-final-e8-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-gramatika-final-e8-b16
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2117
- Rouge1: 66.7567
- Rouge2: 59.3343
- Rougel: 66.4993
- Rougelsum: 66.5275
- Gen Len: 18.5566
## Model description
More information needed
## Intended uses & limitations
More information needed
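A minimal generation sketch (the input sentence is a placeholder; any expected prompt format is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("jeremyvictor/mt5-base-gramatika-final-e8-b16")
model = AutoModelForSeq2SeqLM.from_pretrained("jeremyvictor/mt5-base-gramatika-final-e8-b16")
# Placeholder input; replace with the sentence to be corrected.
inputs = tokenizer("Saya suka makan nasi goreng di pagi hari", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```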
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9122 | 0.37 | 300 | 0.3395 | 63.1315 | 53.1537 | 62.8285 | 62.8152 | 18.5833 |
| 0.4611 | 0.73 | 600 | 0.2870 | 64.8744 | 56.0545 | 64.604 | 64.6011 | 18.5676 |
| 0.3866 | 1.1 | 900 | 0.2690 | 65.2446 | 56.534 | 64.9389 | 64.9484 | 18.5414 |
| 0.2833 | 1.46 | 1200 | 0.2424 | 65.6718 | 57.2619 | 65.4044 | 65.4076 | 18.5566 |
| 0.2633 | 1.83 | 1500 | 0.2240 | 65.7057 | 57.6829 | 65.4464 | 65.4601 | 18.5524 |
| 0.2126 | 2.2 | 1800 | 0.2350 | 66.1634 | 58.4004 | 65.9254 | 65.9147 | 18.5582 |
| 0.1787 | 2.56 | 2100 | 0.2176 | 66.4508 | 58.8845 | 66.1886 | 66.199 | 18.5571 |
| 0.175 | 2.93 | 2400 | 0.2151 | 66.1987 | 58.632 | 65.9844 | 65.995 | 18.5603 |
| 0.1231 | 3.29 | 2700 | 0.2227 | 66.6365 | 59.1886 | 66.4067 | 66.4293 | 18.5571 |
| 0.1195 | 3.66 | 3000 | 0.2117 | 66.7567 | 59.3343 | 66.4993 | 66.5275 | 18.5566 |
| 0.1146 | 4.02 | 3300 | 0.2197 | 66.9385 | 59.8666 | 66.7575 | 66.7651 | 18.5556 |
| 0.0757 | 4.39 | 3600 | 0.2235 | 66.8918 | 59.768 | 66.7208 | 66.7282 | 18.5608 |
| 0.0772 | 4.76 | 3900 | 0.2270 | 67.0955 | 59.9474 | 66.8681 | 66.8905 | 18.5566 |
| 0.0688 | 5.12 | 4200 | 0.2431 | 67.2444 | 60.2703 | 67.0501 | 67.0676 | 18.5550 |
| 0.0512 | 5.49 | 4500 | 0.2439 | 67.198 | 60.2026 | 67.0128 | 67.0433 | 18.5535 |
| 0.0523 | 5.85 | 4800 | 0.2362 | 67.3463 | 60.4479 | 67.1385 | 67.1792 | 18.5592 |
| 0.0408 | 6.22 | 5100 | 0.2587 | 67.4973 | 60.7533 | 67.305 | 67.3418 | 18.5624 |
| 0.0324 | 6.59 | 5400 | 0.2502 | 67.6102 | 60.905 | 67.428 | 67.4547 | 18.5566 |
| 0.0336 | 6.95 | 5700 | 0.2583 | 67.531 | 60.7718 | 67.355 | 67.3762 | 18.5587 |
| 0.0236 | 7.32 | 6000 | 0.2710 | 67.5641 | 60.7633 | 67.3445 | 67.3835 | 18.5603 |
| 0.0222 | 7.68 | 6300 | 0.2729 | 67.5898 | 60.8587 | 67.3926 | 67.4234 | 18.5608 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lrthomps/rl_course_vizdoom_health_gathering_supreme | lrthomps | 2023-07-15T16:48:42Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T16:48:35Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.66 +/- 4.53
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r lrthomps/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
0sunfire0/ppo-PyramidsTraining_00 | 0sunfire0 | 2023-07-15T16:36:05Z | 20 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-15T16:36:02Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: 0sunfire0/ppo-PyramidsTraining_00
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
chainsurfer/q-FrozenLake-v1-4x4-noSlippery | chainsurfer | 2023-07-15T16:23:11Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T16:23:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="chainsurfer/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NotAgain0/ppo-LunarLander-v2 | NotAgain0 | 2023-07-15T16:21:41Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T16:21:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -166.20 +/- 21.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading and evaluation sketch (the checkpoint filename below follows the usual huggingface_sb3 convention and is an assumption, not confirmed by this card):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Requires stable-baselines3 >= 2.0 (gymnasium API).
# "ppo-LunarLander-v2.zip" is the conventional checkpoint name; adjust if this repo uses another.
checkpoint = load_from_hub(repo_id="NotAgain0/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
|
Jinouga/yamanaka-ino-realistic-v1 | Jinouga | 2023-07-15T16:12:30Z | 5 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T16:06:41Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yamanaka-ino-realistic-v1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
digiplay/Remedy | digiplay | 2023-07-15T16:04:57Z | 320 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T00:58:04Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/87025
Original Author's DEMO images:




Sample image I made through Hugging Face's API:

|
NasimB/children-rarity-all-guten-rarity-all-2p5k | NasimB | 2023-07-15T16:00:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T14:01:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children-rarity-all-guten-rarity-all-2p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children-rarity-all-guten-rarity-all-2p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3200
## Model description
More information needed
## Intended uses & limitations
More information needed
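A minimal generation sketch (the prompt is a placeholder and the sampling settings are illustrative only):
```python
from transformers import pipeline
generator = pipeline("text-generation", model="NasimB/children-rarity-all-guten-rarity-all-2p5k")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```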
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7039 | 0.29 | 500 | 5.6480 |
| 5.3358 | 0.59 | 1000 | 5.2080 |
| 4.9956 | 0.88 | 1500 | 4.9573 |
| 4.7225 | 1.17 | 2000 | 4.8060 |
| 4.5557 | 1.47 | 2500 | 4.6798 |
| 4.4478 | 1.76 | 3000 | 4.5744 |
| 4.3246 | 2.05 | 3500 | 4.4978 |
| 4.133 | 2.35 | 4000 | 4.4463 |
| 4.107 | 2.64 | 4500 | 4.3935 |
| 4.0654 | 2.93 | 5000 | 4.3409 |
| 3.8576 | 3.23 | 5500 | 4.3368 |
| 3.8053 | 3.52 | 6000 | 4.3112 |
| 3.7871 | 3.81 | 6500 | 4.2678 |
| 3.6811 | 4.11 | 7000 | 4.2724 |
| 3.5209 | 4.4 | 7500 | 4.2658 |
| 3.5172 | 4.69 | 8000 | 4.2488 |
| 3.4981 | 4.99 | 8500 | 4.2384 |
| 3.3366 | 5.28 | 9000 | 4.2518 |
| 3.3255 | 5.57 | 9500 | 4.2501 |
| 3.3248 | 5.87 | 10000 | 4.2492 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lrthomps/LunarLander-v2 | lrthomps | 2023-07-15T15:59:35Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T15:52:54Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -176.92 +/- 108.43
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'lrthomps/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
oknashar/my_awesome_eli5_clm-model | oknashar | 2023-07-15T15:55:46Z | 56 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T15:46:04Z | ---
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [aubmindlab/bert-base-arabert](https://huggingface.co/aubmindlab/bert-base-arabert) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cpu
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Python/ACROSS-m2o-eng-small | Python | 2023-07-15T15:53:27Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-14T10:19:10Z | # ACROSS-m2o-eng-small
## How to use
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer
model = MT5ForConditionalGeneration.from_pretrained('Python/ACROSS-m2o-eng-small')
tokenizer = AutoTokenizer.from_pretrained('Python/ACROSS-m2o-eng-small', use_fast=False)
input_text = '冈山县的倉敷市整个泡在泥水之中,数千户人家停水停电 这是日本近30多年来因为降雨而造成的死亡人数最多的一次水灾。究竟为何如此严重?仍然是每个人心中的疑问。 日本一向被视为是“防灾强国”,日本人对地震、台风、海啸等自然灾难绝对不陌生。 但这次暴雨引发水灾和土石流,竟然出现如此惊人的天灾死亡人数,也令许多人感到震惊。 短短几日的降雨量达到整个7月正常降雨量的三倍之多 超大降雨 究其原因,首先是短时间之内的超大降雨。 日本气象厅上周对西日本多个地方发布“大雨特别警报”,警告西部地方会受到“数十年一遇”的豪大雨,结果一共有93个观测站录得史上雨量第一的纪录。 从上周四开始的短短几日之内,日本西部地区多个地方的降雨量达到整个7月正常降雨量的三倍之多。 日本此次降雨多个地方超过上千毫米,日本气象厅也将这次豪雨正式命名为“平成30年7月豪雨”。 一共有7万多人参与救灾工作 河川溃堤 此外,超大豪雨超过河川疏洪承受度,短时间涌入巨大水量造成河川溃堤,沿岸市镇整个泡在泥水之中。 日本《每日新闻》报道说,冈山县的小田川溃堤,至少4600户都被洪水淹没,许多长者逃生不及淹死在自己家中。 暴雨过后被毁坏的家园 回水现象 据《产经新闻》报导,冈山县仓敷市真备町内的高梁川各支流共有5处溃堤,是因为大雨让河川主流水位上升,导致原本要和主流汇集的的支流无法流入,因此溃堤淹没附近区域,这样的状况被称之为“回水现象”。 有专家指出,“回水现象”也是这次豪雨水灾如此严重的原因之一。 救难人员抓紧时间在土石堆和残垣断壁下搜寻抢救生还者 山体滑坡 除了超大豪雨之外,日本地形多山,还有板块和花岗岩地质层,不少民宅都建筑在山坡地,一旦遇上大雨容易发生山体滑坡现象。 《日本经济新闻》报道说,这次日本暴雨灾难,多个地方发生大规模山体滑坡灾害,导致遇难人数增加。 受灾区的15个县有大约12000人安置到学校和体育馆等避难中心 该报引述京都大学防灾研究所的应用地质学教授千木良雅弘分析说,灾区是花岗岩的分布地区,其表层由“风化花岗岩”砂土覆盖,一旦降雨,表层滑坡就成为土石流,涌入住宅区。 专家也指出,表层滑坡导致的灾害近年来频频发生,原因多半是局部性暴雨所导致,需要检讨是否要在可能发生表层滑坡的地区建设住宅。'
inputs = tokenizer(input_text, max_length=512, truncation=True, return_tensors='pt')
generate_ids = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
num_beams=5,
min_length=10,
length_penalty=0.8,
max_length=84
)
print(tokenizer.decode(generate_ids[0], skip_special_tokens=True))
``` |
ptah23/whisper-small-af-ZA | ptah23 | 2023-07-15T15:45:46Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:google/fleurs",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-14T22:45:53Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-small-af-ZA
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs
type: google/fleurs
config: af_za
split: train+validation
args: af_za
metrics:
- name: Wer
type: wer
value: 0.36644093303235514
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-af-ZA
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5728
- Wer: 0.3664
- Wer Ortho: 0.3943
## Model description
More information needed
## Intended uses & limitations
More information needed
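A minimal transcription sketch using the processor and model classes directly (the audio path is a placeholder; this assumes a standard Whisper checkpoint layout):
```python
import librosa
from transformers import WhisperProcessor, WhisperForConditionalGeneration
processor = WhisperProcessor.from_pretrained("ptah23/whisper-small-af-ZA")
model = WhisperForConditionalGeneration.from_pretrained("ptah23/whisper-small-af-ZA")
# "clip.wav" is a placeholder; Whisper expects 16 kHz mono audio.
speech, _ = librosa.load("clip.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```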
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 5
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wer Ortho |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|
| 0.7731 | 1.45 | 100 | 0.7280 | 0.3740 | 0.3863 |
| 0.2103 | 2.9 | 200 | 0.5116 | 0.3661 | 0.3859 |
| 0.0633 | 4.35 | 300 | 0.4967 | 0.2810 | 0.3008 |
| 0.0249 | 5.8 | 400 | 0.5003 | 0.3299 | 0.3477 |
| 0.0143 | 7.25 | 500 | 0.5191 | 0.3510 | 0.3660 |
| 0.0053 | 8.7 | 600 | 0.5149 | 0.3070 | 0.3221 |
| 0.0035 | 10.14 | 700 | 0.5345 | 0.3266 | 0.3443 |
| 0.0027 | 11.59 | 800 | 0.5339 | 0.3175 | 0.3344 |
| 0.0026 | 13.04 | 900 | 0.5435 | 0.3134 | 0.3328 |
| 0.0037 | 14.49 | 1000 | 0.5346 | 0.2506 | 0.2714 |
| 0.0045 | 15.94 | 1100 | 0.5438 | 0.3220 | 0.3389 |
| 0.0028 | 17.39 | 1200 | 0.5588 | 0.2551 | 0.2740 |
| 0.0036 | 18.84 | 1300 | 0.5466 | 0.2728 | 0.2702 |
| 0.0035 | 20.29 | 1400 | 0.5364 | 0.3119 | 0.3332 |
| 0.0056 | 21.74 | 1500 | 0.5608 | 0.2506 | 0.2721 |
| 0.0037 | 23.19 | 1600 | 0.5443 | 0.2833 | 0.3027 |
| 0.0035 | 24.64 | 1700 | 0.5466 | 0.3631 | 0.3866 |
| 0.0024 | 26.09 | 1800 | 0.5628 | 0.3198 | 0.3416 |
| 0.0036 | 27.54 | 1900 | 0.5495 | 0.2946 | 0.3122 |
| 0.0016 | 28.99 | 2000 | 0.5728 | 0.3664 | 0.3943 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
efainman/q-Taxi-v3 | efainman | 2023-07-15T15:43:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T15:43:53Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="efainman/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
efainman/q-FrozenLake-v1-4x4-noSlippery | efainman | 2023-07-15T15:39:51Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T15:39:48Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="efainman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
roborovski/phi-2-classifier | roborovski | 2023-07-15T15:37:43Z | 28 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-12T00:07:23Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: phi-2-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-classifier
This model is a fine-tuned version of [bigcode/starencoder](https://huggingface.co/bigcode/starencoder) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4538
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3098 | 1.0 | 1485 | 0.3670 | 0.89 |
| 0.4251 | 2.0 | 2970 | 0.3698 | 0.88 |
| 0.226 | 3.0 | 4455 | 0.4538 | 0.875 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pszemraj/BERTopic-booksum-ngram1-sentence-t5-xl-chapter | pszemraj | 2023-07-15T15:33:56Z | 4 | 0 | bertopic | [
"bertopic",
"text-classification",
"en",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-06-26T13:45:57Z | ---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
license: apache-2.0
datasets:
- kmfoda/booksum
language:
- en
inference: False
---
# BERTopic-booksum-ngram1-sentence-t5-xl-chapter
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic safetensors
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("pszemraj/BERTopic-booksum-ngram1-sentence-t5-xl-chapter")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 138
* Number of training documents: 70840
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | were - her - was - had - she | 30 | -1_were_her_was_had |
| 0 | were - had - was - could - miss | 28715 | 0_were_had_was_could |
| 1 | artagnan - athos - musketeers - porthos - treville | 16916 | 1_artagnan_athos_musketeers_porthos |
| 2 | rama - ravan - brahma - lakshman - raghu | 4563 | 2_rama_ravan_brahma_lakshman |
| 3 | were - canoe - hist - huron - hutter | 1268 | 3_were_canoe_hist_huron |
| 4 | slave - were - slavery - had - was | 1011 | 4_slave_were_slavery_had |
| 5 | holmes - sherlock - watson - moor - baskerville | 580 | 5_holmes_sherlock_watson_moor |
| 6 | prisoner - milady - felton - were - madame | 549 | 6_prisoner_milady_felton_were |
| 7 | coriolanus - cassius - brutus - sicinius - titus | 527 | 7_coriolanus_cassius_brutus_sicinius |
| 8 | confederation - constitution - federal - states - senate | 511 | 8_confederation_constitution_federal_states |
| 9 | heathcliff - catherine - wuthering - cathy - hindley | 498 | 9_heathcliff_catherine_wuthering_cathy |
| 10 | were - seemed - rima - was - had | 492 | 10_were_seemed_rima_was |
| 11 | laws - lawes - law - civill - actions | 452 | 11_laws_lawes_law_civill |
| 12 | fang - wolf - fangs - musher - growl | 401 | 12_fang_wolf_fangs_musher |
| 13 | sigurd - thorgeir - thord - gunnar - skarphedinn | 395 | 13_sigurd_thorgeir_thord_gunnar |
| 14 | achilles - troy - patroclus - aeneas - ulysses | 385 | 14_achilles_troy_patroclus_aeneas |
| 15 | fogg - passengers - passed - phileas - travellers | 376 | 15_fogg_passengers_passed_phileas |
| 16 | troy - trojans - aeneas - fates - trojan | 370 | 16_troy_trojans_aeneas_fates |
| 17 | disciples - jesus - pharisees - temple - jerusalem | 340 | 17_disciples_jesus_pharisees_temple |
| 18 | helsing - harker - diary - dr - he | 324 | 18_helsing_harker_diary_dr |
| 19 | lama - who - no - kim - am | 312 | 19_lama_who_no_kim |
| 20 | sara - princess - herself - she - minchin | 301 | 20_sara_princess_herself_she |
| 21 | horses - horse - saddle - stable - were | 293 | 21_horses_horse_saddle_stable |
| 22 | hester - pearl - scarlet - her - human | 292 | 22_hester_pearl_scarlet_her |
| 23 | candide - inquisitor - friar - cunegonde - philosopher | 286 | 23_candide_inquisitor_friar_cunegonde |
| 24 | dick - aunt - were - could - had | 275 | 24_dick_aunt_were_could |
| 25 | wolves - wolf - cub - hunger - were | 261 | 25_wolves_wolf_cub_hunger |
| 26 | god - gods - consequences - satan - som | 241 | 26_god_gods_consequences_satan |
| 27 | modesty - women - behaviour - human - woman | 240 | 27_modesty_women_behaviour_human |
| 28 | society - education - distribution - service - labour | 240 | 28_society_education_distribution_service |
| 29 | siddhartha - buddha - gotama - kamaswami - om | 237 | 29_siddhartha_buddha_gotama_kamaswami |
| 30 | ship - captain - aboard - squire - ll | 229 | 30_ship_captain_aboard_squire |
| 31 | cyrano - roxane - montfleury - hark - love | 227 | 31_cyrano_roxane_montfleury_hark |
| 32 | alice - were - rabbit - hare - hatter | 225 | 32_alice_were_rabbit_hare |
| 33 | toto - kansas - dorothy - oz - scarecrow | 211 | 33_toto_kansas_dorothy_oz |
| 34 | lancelot - camelot - merlin - guinevere - arthur | 209 | 34_lancelot_camelot_merlin_guinevere |
| 35 | were - soldiers - seemed - soldier - th | 201 | 35_were_soldiers_seemed_soldier |
| 36 | were - was - fields - seemed - hills | 200 | 36_were_was_fields_seemed |
| 37 | reason - thyself - actions - thine - life | 179 | 37_reason_thyself_actions_thine |
| 38 | hetty - her - she - judith - were | 170 | 38_hetty_her_she_judith |
| 39 | othello - iago - desdemona - ll - roderigo | 170 | 39_othello_iago_desdemona_ll |
| 40 | wildeve - yes - were - vye - was | 165 | 40_wildeve_yes_were_vye |
| 41 | utilitarian - morality - morals - virtue - moral | 165 | 41_utilitarian_morality_morals_virtue |
| 42 | ransom - isaac - thine - thy - shekels | 163 | 42_ransom_isaac_thine_thy |
| 43 | weasels - rat - ratty - toad - badger | 157 | 43_weasels_rat_ratty_toad |
| 44 | philip - he - were - vicar - was | 155 | 44_philip_he_were_vicar |
| 45 | macbeth - banquo - macduff - fleance - murderer | 154 | 45_macbeth_banquo_macduff_fleance |
| 46 | lydgate - bulstrode - himself - he - had | 145 | 46_lydgate_bulstrode_himself_he |
| 47 | capulet - romeo - juliet - verona - mercutio | 142 | 47_capulet_romeo_juliet_verona |
| 48 | dying - her - were - helen - she | 141 | 48_dying_her_were_helen |
| 49 | anne - avonlea - diana - her - marilla | 141 | 49_anne_avonlea_diana_her |
| 50 | tartuffe - scene - dorine - pernelle - scoundrel | 140 | 50_tartuffe_scene_dorine_pernelle |
| 51 | were - yes - had - was - no | 139 | 51_were_yes_had_was |
| 52 | jekyll - hyde - were - myself - had | 135 | 52_jekyll_hyde_were_myself |
| 53 | loved - were - philip - was - could | 128 | 53_loved_were_philip_was |
| 54 | falstaff - mistress - ford - forsooth - windsor | 127 | 54_falstaff_mistress_ford_forsooth |
| 55 | hurstwood - were - barn - had - was | 127 | 55_hurstwood_were_barn_had |
| 56 | provost - capell - collier - conj - pope | 126 | 56_provost_capell_collier_conj |
| 57 | gretchen - highness - chancellor - hildegarde - yes | 125 | 57_gretchen_highness_chancellor_hildegarde |
| 58 | delamere - watson - dr - ll - no | 124 | 58_delamere_watson_dr_ll |
| 59 | jem - her - were - felt - margaret | 123 | 59_jem_her_were_felt |
| 60 | beowulf - grendel - hrothgar - wiglaf - hero | 111 | 60_beowulf_grendel_hrothgar_wiglaf |
| 61 | verloc - seemed - was - were - had | 102 | 61_verloc_seemed_was_were |
| 62 | hamlet - guildenstern - rosencrantz - fortinbras - polonius | 102 | 62_hamlet_guildenstern_rosencrantz_fortinbras |
| 63 | corey - mrs - yes - business - lapham | 101 | 63_corey_mrs_yes_business |
| 64 | projectiles - cannon - projectile - distance - satellite | 99 | 64_projectiles_cannon_projectile_distance |
| 65 | piano - musical - music - played - beethoven | 98 | 65_piano_musical_music_played |
| 66 | wedding - bridegroom - were - marriage - looked | 93 | 66_wedding_bridegroom_were_marriage |
| 67 | juan - her - fame - some - had | 92 | 67_juan_her_fame_some |
| 68 | were - looked - felt - her - had | 91 | 68_were_looked_felt_her |
| 69 | staked - gambling - wildeve - stakes - dice | 91 | 69_staked_gambling_wildeve_stakes |
| 70 | mistress - leonora - wanted - florence - was | 89 | 70_mistress_leonora_wanted_florence |
| 71 | delano - ship - sailor - captain - benito | 87 | 71_delano_ship_sailor_captain |
| 72 | yes - goring - no - robert - room | 85 | 72_yes_goring_no_robert |
| 73 | stockmann - yes - horster - mayor - dr | 81 | 73_stockmann_yes_horster_mayor |
| 74 | ll - were - looked - carl - was | 80 | 74_ll_were_looked_carl |
| 75 | barber - philosophy - no - some - man | 78 | 75_barber_philosophy_no_some |
| 76 | tom - maggie - came - had - tulliver | 78 | 76_tom_maggie_came_had |
| 77 | middlemarch - hustings - candidate - brooke - may | 75 | 77_middlemarch_hustings_candidate_brooke |
| 78 | inspector - verloc - yes - affair - police | 75 | 78_inspector_verloc_yes_affair |
| 79 | scrooge - merry - no - christmas - man | 73 | 79_scrooge_merry_no_christmas |
| 80 | coquenard - mutton - served - were - pudding | 70 | 80_coquenard_mutton_served_were |
| 81 | yes - no - jack - ll - tell | 69 | 81_yes_no_jack_ll |
| 82 | seth - lisbeth - th - ud - no | 67 | 82_seth_lisbeth_th_ud |
| 83 | higgins - eliza - her - she - liza | 66 | 83_higgins_eliza_her_she |
| 84 | yarmouth - were - went - had - was | 65 | 84_yarmouth_were_went_had |
| 85 | servian - sergius - yes - catherine - no | 64 | 85_servian_sergius_yes_catherine |
| 86 | service - army - salvation - institution - training | 61 | 86_service_army_salvation_institution |
| 87 | condemn - ff - pray - mercy - conj | 58 | 87_condemn_ff_pray_mercy |
| 88 | lucy - bartlett - were - could - she | 57 | 88_lucy_bartlett_were_could |
| 89 | wills - seemed - bequest - were - testator | 54 | 89_wills_seemed_bequest_were |
| 90 | scene - iii - malvolio - valentine - cesario | 54 | 90_scene_iii_malvolio_valentine |
| 91 | fuss - think - ll - thinks - oh | 53 | 91_fuss_think_ll_thinks |
| 92 | hermia - demetrius - helena - theseus - helen | 50 | 92_hermia_demetrius_helena_theseus |
| 93 | seemed - rochester - were - had - yes | 50 | 93_seemed_rochester_were_had |
| 94 | sorrow - mourned - myself - had - was | 48 | 94_sorrow_mourned_myself_had |
| 95 | gerty - sleepless - tea - weariness - tired | 48 | 95_gerty_sleepless_tea_weariness |
| 96 | rushworth - crawford - were - sotherton - was | 47 | 96_rushworth_crawford_were_sotherton |
| 97 | reasoning - syllogisme - names - signification - definitions | 46 | 97_reasoning_syllogisme_names_signification |
| 98 | could - caleb - sure - work - no | 46 | 98_could_caleb_sure_work |
| 99 | rose - tears - hope - tell - wish | 46 | 99_rose_tears_hope_tell |
| 100 | peggotty - em - gummidge - he - ll | 46 | 100_peggotty_em_gummidge_he |
| 101 | time - future - story - paradox - traveller | 46 | 101_time_future_story_paradox |
| 102 | cleopatra - antony - caesar - loved - slave | 45 | 102_cleopatra_antony_caesar_loved |
| 103 | appendicitis - doctors - doctor - dr - wanted | 45 | 103_appendicitis_doctors_doctor_dr |
| 104 | slept - awoke - waking - sleep - seemed | 44 | 104_slept_awoke_waking_sleep |
| 105 | parlour - room - seemed - sat - had | 43 | 105_parlour_room_seemed_sat |
| 106 | prophets - scripture - prophet - moses - prophecy | 43 | 106_prophets_scripture_prophet_moses |
| 107 | letter - honour - adieu - duval - evelina | 43 | 107_letter_honour_adieu_duval |
| 108 | complications - cranky - had - tanis - was | 43 | 108_complications_cranky_had_tanis |
| 109 | fled - armies - brussels - imperial - napoleon | 42 | 109_fled_armies_brussels_imperial |
| 110 | philip - easel - greco - impressionists - manet | 42 | 110_philip_easel_greco_impressionists |
| 111 | harlings - harling - frances - were - shimerdas | 40 | 111_harlings_harling_frances_were |
| 112 | jane - mrs - janet - eyre - her | 40 | 112_jane_mrs_janet_eyre |
| 113 | prisoner - confinement - prisoners - prison - gaoler | 40 | 113_prisoner_confinement_prisoners_prison |
| 114 | hardcastle - marlow - impudence - constance - modesty | 40 | 114_hardcastle_marlow_impudence_constance |
| 115 | horatio - murder - revenge - sorrow - hieronimo | 40 | 115_horatio_murder_revenge_sorrow |
| 116 | traddles - had - married - room - horace | 39 | 116_traddles_had_married_room |
| 117 | philip - tell - feelings - was - remember | 38 | 117_philip_tell_feelings_was |
| 118 | nervous - countenance - seemed - he - huxtable | 38 | 118_nervous_countenance_seemed_he |
| 119 | rogers - wanted - lapham - could - silas | 38 | 119_rogers_wanted_lapham_could |
| 120 | titus - timon - varro - servilius - alcibiades | 37 | 120_titus_timon_varro_servilius |
| 121 | morality - justice - moral - impartiality - unjust | 37 | 121_morality_justice_moral_impartiality |
| 122 | willard - elmer - were - was - henderson | 37 | 122_willard_elmer_were_was |
| 123 | had - was - could - circumstances - possession | 37 | 123_had_was_could_circumstances |
| 124 | monkey - he - sahib - rat - sara | 36 | 124_monkey_he_sahib_rat |
| 125 | mcmurdo - mcginty - cormac - police - scanlan | 36 | 125_mcmurdo_mcginty_cormac_police |
| 126 | hetty - herself - she - her - had | 36 | 126_hetty_herself_she_her |
| 127 | dimmesdale - reverend - chillingworth - clergyman - deacon | 35 | 127_dimmesdale_reverend_chillingworth_clergyman |
| 128 | formerly - eliza - was - friend - friends | 34 | 128_formerly_eliza_was_friend |
| 129 | were - seemed - had - was - felt | 34 | 129_were_seemed_had_was |
| 130 | prisoner - jerry - lorry - tellson - court | 33 | 130_prisoner_jerry_lorry_tellson |
| 131 | macmurdo - wenham - captain - steyne - crawley | 33 | 131_macmurdo_wenham_captain_steyne |
| 132 | ducal - duchy - xv - fetes - theatre | 32 | 132_ducal_duchy_xv_fetes |
| 133 | chapter - book - dows - unt - windowpane | 32 | 133_chapter_book_dows_unt |
| 134 | money - riches - things - risk - thoughts | 31 | 134_money_riches_things_risk |
| 135 | bethy - beth - seemed - sister - her | 31 | 135_bethy_beth_seemed_sister |
| 136 | oliver - pickwick - were - was - inn | 30 | 136_oliver_pickwick_were_was |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 30
* n_gram_range: (1, 1)
* nr_topics: auto
* seed_topic_list: None
* top_n_words: 10
* verbose: True
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 2.0.2
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.57.1
* Plotly: 5.15.0
* Python: 3.10.11 |
manuu01/ppo-LunarLander-v2 | manuu01 | 2023-07-15T15:29:39Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T15:29:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.39 +/- 15.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# Assumed checkpoint name; adjust if this repo stores the model under a different filename.
checkpoint = load_from_hub(repo_id="manuu01/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mistermango24/yiffymix_3.1V | Mistermango24 | 2023-07-15T15:05:10Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T14:58:42Z | ---
license: creativeml-openrail-m
---
|
csikasote/wav2vec2-large-mms-1b-nya-colab | csikasote | 2023-07-15T15:01:39Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-15T12:28:29Z | ---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-nya-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-nya-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4327
- Wer: 0.3505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1659 | 0.2 | 200 | 0.6822 | 0.5353 |
| 0.2331 | 0.39 | 400 | 0.5220 | 0.4493 |
| 0.2119 | 0.59 | 600 | 0.4967 | 0.4146 |
| 0.1995 | 0.79 | 800 | 0.5021 | 0.4025 |
| 0.1812 | 0.99 | 1000 | 0.5046 | 0.3979 |
| 0.1744 | 1.18 | 1200 | 0.4786 | 0.3884 |
| 0.1783 | 1.38 | 1400 | 0.4630 | 0.3786 |
| 0.1663 | 1.58 | 1600 | 0.4511 | 0.3634 |
| 0.1609 | 1.77 | 1800 | 0.4656 | 0.3647 |
| 0.1632 | 1.97 | 2000 | 0.4254 | 0.3553 |
| 0.1568 | 2.17 | 2200 | 0.4326 | 0.3529 |
| 0.1544 | 2.37 | 2400 | 0.4291 | 0.3477 |
| 0.1524 | 2.56 | 2600 | 0.4327 | 0.3505 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zjkarina/LaBSE-instructDialogs | zjkarina | 2023-07-15T14:30:56Z | 47 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"pretraining",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-02T22:07:50Z | ---
language:
- en
---
[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) fine-tuned on an instructional question-and-answer dataset. Evaluated with the **Precision at K** and **Mean reciprocal rank** metrics.
Precision at K is a simple metric to understand and implement, but it has an important disadvantage: it does not take the order of elements within the top K into account. If we guessed only one item out of ten, it does not matter whether it was in the first or the last position: P@10 = 0.1 in either case, even though the first variant is clearly better.
Mean reciprocal rank is equal to the reciprocal rank of the first correctly guessed item. It varies in the range [0, 1] and takes the position of items into account. Unfortunately, it does this only for one item, the first correctly predicted one, ignoring all subsequent items.
Evaluation results:
```python
p@1: 52 %
p@3: 66 %
p@5: 73 %
p@10: 79 %
p@15: 82 %
MRR: 62 %
```
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("zjkarina/LaBSE-instructDialogs")
model = AutoModel.from_pretrained("zjkarina/LaBSE-instructDialogs")
sentences = ["List 5 reasons why someone should learn to code", "Describe the sound of the wind on a sunny day."]
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')
with torch.no_grad():
model_output = model(**encoded_input)
embeddings = model_output.pooler_output
embeddings = torch.nn.functional.normalize(embeddings)
print(embeddings)
``` |
BigArt/open_llama_7b_v2_orca_lora_233 | BigArt | 2023-07-15T14:26:24Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T14:26:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
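A minimal adapter-loading sketch (the base checkpoint below is inferred from the repo name and is an assumption, not stated in this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Assumed base model; replace if the adapter was trained on a different checkpoint.
base_id = "openlm-research/open_llama_7b_v2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "BigArt/open_llama_7b_v2_orca_lora_233")
```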
|
flax-community/bert-swahili-news-classification | flax-community | 2023-07-15T14:21:05Z | 325 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"sw",
"dataset:flax-community/swahili-safi",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: sw
widget:
- text: "Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali.Idris ameandika;"
datasets:
- flax-community/swahili-safi
---
## Swahili News Classification with BERT
This model was trained using HuggingFace's Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by [HuggingFace](https://huggingface.co). All training was done on a TPUv3-8 VM sponsored by the Google Cloud team.
This [model](https://huggingface.co/flax-community/bert-base-uncased-swahili) was used as the base and fine-tuned for this task.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-swahili-news-classification")
model = AutoModelForSequenceClassification.from_pretrained("flax-community/bert-swahili-news-classification")
```
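A minimal inference sketch reusing the objects above (the example text is the widget sentence from this card; label names come from the model's config):
```python
from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Idris ameandika kwenye ukurasa wake wa Instagram akimkumbusha Diamond kutekeleza ahadi yake kumpigia Zari magoti kumuomba msamaha kama alivyowahi kueleza awali."))
```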
```
Eval metrics (10% valid set): {'accuracy': 0.9114740008594757}
```
|
fgaim/tiroberta-base | fgaim | 2023-07-15T14:20:28Z | 123 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"ti",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
language: ti
widget:
- text: "ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"
---
# TiRoBERTa: RoBERTa Pretrained for the Tigrinya Language
We pretrain a RoBERTa base model for Tigrinya on a dataset of 40 million tokens trained for 40 epochs.
Contained in this repo are the original pretrained Flax model, which was trained on a TPU v3-8, and its corresponding PyTorch version.
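For a quick check of the pretrained model, the `fill-mask` pipeline can be used; a minimal sketch with the widget example above:
```python
from transformers import pipeline

# Sketch: query the masked-language model with the widget example
fill_mask = pipeline("fill-mask", model="fgaim/tiroberta-base")
print(fill_mask("ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"))
```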
## Hyperparameters
The hyperparameters corresponding to model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M | 512 |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021 at EMNLP 2021}
}
```
|
RottenLemons/flan-t5-base-downsamples | RottenLemons | 2023-07-15T14:15:47Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T03:18:42Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-downsamples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-downsamples
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0041
- F1: 99.1107
- Gen Len: 2.0630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrizalf7/xlm-r-qa-squad1.1-squad2.0-tf-3 | mrizalf7 | 2023-07-15T14:12:50Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-15T12:38:20Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad1.1-squad2.0-tf-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad1.1-squad2.0-tf-3
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9462 | 1.0 | 5437 | 2.0332 |
| 0.6874 | 2.0 | 10874 | 2.2377 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/children_rarity_all_bnc_rarity | NasimB | 2023-07-15T13:54:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T11:52:38Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: children_rarity_all_bnc_rarity
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# children_rarity_all_bnc_rarity
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7137 | 0.29 | 500 | 5.6432 |
| 5.3378 | 0.59 | 1000 | 5.2047 |
| 5.0051 | 0.88 | 1500 | 4.9540 |
| 4.7284 | 1.17 | 2000 | 4.8094 |
| 4.565 | 1.46 | 2500 | 4.6819 |
| 4.461 | 1.76 | 3000 | 4.5795 |
| 4.3297 | 2.05 | 3500 | 4.5018 |
| 4.136 | 2.34 | 4000 | 4.4570 |
| 4.1172 | 2.63 | 4500 | 4.3968 |
| 4.0704 | 2.93 | 5000 | 4.3422 |
| 3.8703 | 3.22 | 5500 | 4.3401 |
| 3.8155 | 3.51 | 6000 | 4.3116 |
| 3.79 | 3.8 | 6500 | 4.2753 |
| 3.6938 | 4.1 | 7000 | 4.2763 |
| 3.5269 | 4.39 | 7500 | 4.2704 |
| 3.5216 | 4.68 | 8000 | 4.2539 |
| 3.5087 | 4.97 | 8500 | 4.2424 |
| 3.3411 | 5.27 | 9000 | 4.2559 |
| 3.3301 | 5.56 | 9500 | 4.2550 |
| 3.3351 | 5.85 | 10000 | 4.2540 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
NasimB/guten_rarity_all_iorder_cut_19k | NasimB | 2023-07-15T13:35:16Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T11:35:27Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten_rarity_all_iorder_cut_19k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten_rarity_all_iorder_cut_19k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6999 | 0.29 | 500 | 5.6379 |
| 5.3438 | 0.59 | 1000 | 5.2092 |
| 4.9957 | 0.88 | 1500 | 4.9584 |
| 4.7227 | 1.18 | 2000 | 4.8042 |
| 4.5678 | 1.47 | 2500 | 4.6893 |
| 4.4552 | 1.77 | 3000 | 4.5846 |
| 4.3223 | 2.06 | 3500 | 4.5046 |
| 4.1416 | 2.36 | 4000 | 4.4568 |
| 4.1109 | 2.65 | 4500 | 4.3937 |
| 4.0736 | 2.95 | 5000 | 4.3412 |
| 3.8588 | 3.24 | 5500 | 4.3404 |
| 3.8098 | 3.54 | 6000 | 4.3098 |
| 3.7959 | 3.83 | 6500 | 4.2759 |
| 3.6657 | 4.12 | 7000 | 4.2758 |
| 3.5345 | 4.42 | 7500 | 4.2669 |
| 3.5164 | 4.71 | 8000 | 4.2560 |
| 3.504 | 5.01 | 8500 | 4.2472 |
| 3.3339 | 5.3 | 9000 | 4.2570 |
| 3.332 | 5.6 | 9500 | 4.2555 |
| 3.3329 | 5.89 | 10000 | 4.2549 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
pelinbalci/flant5-dialoguesum | pelinbalci | 2023-07-15T13:34:48Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T11:48:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flant5-dialoguesum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flant5-dialoguesum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
digiplay/Dusk-1 | digiplay | 2023-07-15T13:32:12Z | 1,258 | 4 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-14T08:03:25Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/108759/dusk
Sample image I made through Hugging Face's API:

Original Author's DEMO images :
 |
TheBloke/WizardCoder-Guanaco-15B-V1.1-GGML | TheBloke | 2023-07-15T13:29:15Z | 0 | 30 | null | [
"en",
"dataset:guanaco",
"license:apache-2.0",
"region:us"
] | null | 2023-07-15T13:03:14Z | ---
datasets:
- guanaco
inference: false
language:
- en
license:
- apache-2.0
model_hub_library:
- transformers
model_type: starcoder
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# LoupGarou's WizardCoder-Guanaco-15B-V1.1 GGML
These files are StarCoder GGML format model files for [LoupGarou's WizardCoder-Guanaco-15B-V1.1](https://huggingface.co/LoupGarou/WizardCoder-Guanaco-15B-V1.1).
Please note that these GGMLs are **not compatible with llama.cpp, text-generation-webui or llama-cpp-python**. Please see below for a list of tools that work with this GGML model.
These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardCoder-Guanaco-15B-V1.1-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-Guanaco-15B-V1.1-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LoupGarou/WizardCoder-Guanaco-15B-V1.1)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
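When calling the model from code, the template above is just a string to fill in before generation; a minimal sketch:
```python
# Sketch: fill in the Alpaca-style prompt template shown above before generation
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction: {prompt}\n"
    "### Response:"
)
full_prompt = ALPACA_TEMPLATE.format(prompt="Write a Python function that reverses a string.")
```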
<!-- compatibility_ggml start -->
## Compatibility
These files are **not** compatible with llama.cpp, text-generation-webui or llama-cpp-python.
They can be used with:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful inference engine based on llama.cpp with full GPU acceleration and good UI.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI for GGML inference on Windows and macOS.
* [LoLLMs-WebUI](https://github.com/ParisNeo/LoLLMs-WebUI) a web UI which supports nearly every backend out there. Use ctransformers backend for support for this model.
* [ctransformers](https://github.com/marella/ctransformers): for use in Python code, including LangChain support.
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `starcoder` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using LoLLMs-WebUI:
* [Video tutorial, by LoLLMs-WebUI's author **ParisNeo**](https://youtu.be/vBU1b5n0GMU)
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardcoder-guanaco-15b-v1.1.ggmlv1.q4_0.bin | q4_0 | 4 | 10.75 GB| 13.25 GB | 4-bit. |
| wizardcoder-guanaco-15b-v1.1.ggmlv1.q4_1.bin | q4_1 | 4 | 11.92 GB| 14.42 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| wizardcoder-guanaco-15b-v1.1.ggmlv1.q5_0.bin | q5_0 | 5 | 13.09 GB| 15.59 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| wizardcoder-guanaco-15b-v1.1.ggmlv1.q5_1.bin | q5_1 | 5 | 14.26 GB| 16.76 GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
| wizardcoder-guanaco-15b-v1.1.ggmlv1.q8_0.bin | q8_0 | 8 | 20.11 GB| 22.61 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: LoupGarou's WizardCoder-Guanaco-15B-V1.1
## WizardCoder-Guanaco-15B-V1.1 Model Card
The WizardCoder-Guanaco-15B-V1.1 is a language model that combines the strengths of the [WizardCoder](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) base model and the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset for finetuning. The openassistant-guanaco dataset was further trimmed to within 2 standard deviations of token size for input and output pairs and all non-english data has been removed to reduce training size requirements.
Version 1.1 showcases notable enhancements, employing a modified version of the previous openassistant-guanaco dataset. This dataset underwent a comprehensive revision, replacing every single answer with those generated by GPT-4.
The volume of the datasets has also been augmented by approximately 50%, with a particular focus on high school and abstract algebra. This expansion leveraged the combined capabilities of GPT-4 and GPT-3.5-Turbo. The initial evaluation of algebraic functions over 12 epochs indicated promising results from this enriched dataset. However, this is just the beginning; further refinements are in the pipeline, aiming to optimize the dataset quality and subsequently decrease the number of epochs required to achieve comparable results.
Considering the need to curtail memory consumption during training, this dataset was tailored to consist solely of English language questions and answers. Consequently, the model's performance in language translation may not be up to par. Nevertheless, the focus remains on enhancing the model's proficiency and efficiency within its defined scope.
# Intended Use
This model is designed to be used for a wide array of text generation tasks that require understanding and generating English text. The model is expected to perform well in tasks such as answering questions, writing essays, summarizing text, translation, and more. However, given the specific data processing and finetuning done, it might be particularly effective for tasks related to English language question-answering systems.
# Limitations
Despite the powerful capabilities of this model, users should be aware of its limitations. The model's knowledge is up to date only until the time it was trained, and it doesn't know about events in the world after that. It can sometimes produce incorrect or nonsensical responses, as it doesn't understand the text in the same way humans do. It should be used as a tool to assist in generating text and not as a sole source of truth.
# How to use
Here is an example of how to use this model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import time
import torch
class Chatbot:
def __init__(self, model_name):
self.tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
self.model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, torch_dtype=torch.bfloat16)
if self.tokenizer.pad_token_id is None:
self.tokenizer.pad_token_id = self.tokenizer.eos_token_id
def get_response(self, prompt):
inputs = self.tokenizer.encode_plus(prompt, return_tensors="pt", padding='max_length', max_length=100)
if next(self.model.parameters()).is_cuda:
inputs = {name: tensor.to('cuda') for name, tensor in inputs.items()}
start_time = time.time()
tokens = self.model.generate(input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
pad_token_id=self.tokenizer.pad_token_id,
max_new_tokens=400)
end_time = time.time()
output_tokens = tokens[0][inputs['input_ids'].shape[-1]:]
output = self.tokenizer.decode(output_tokens, skip_special_tokens=True)
time_taken = end_time - start_time
return output, time_taken
def main():
chatbot = Chatbot("LoupGarou/WizardCoder-Guanaco-15B-V1.1")
while True:
user_input = input("Enter your prompt: ")
if user_input.lower() == 'quit':
break
output, time_taken = chatbot.get_response(user_input)
print("\033[33m" + output + "\033[0m")
print("Time taken to process: ", time_taken, "seconds")
print("Exited the program.")
if __name__ == "__main__":
main()
```
# Training Procedure
The WizardCoder model, serving as the base, was fine-tuned on a modified version of the openassistant-guanaco dataset. This dataset underwent a significant revision, replacing every single answer with responses generated by the AI model GPT-4. It was then expanded by approximately 50%, emphasizing high school and abstract algebra-related questions, using a mix of GPT-4 and GPT-3.5-Turbo for answer generation.
The selected dataset was standardized to fall within two standard deviations of token size for the question sets, ensuring consistency in data handling. The order of the questions was also randomized to mitigate any potential biases during the training phase.
In the interest of optimizing memory usage during the training process, the dataset was streamlined to only include English language content. As a result, all non-English data was systematically expunged from this fine-tuning dataset. It's worth noting that this modification limits the model's performance in language translation tasks, but it significantly boosts its efficiency and effectiveness when dealing with English language questions and answers.
## Acknowledgements
This model, WizardCoder-Guanaco-15B-V1.1, is simply building on the efforts of two great teams to evaluate the performance of a combined model with the strengths of the [WizardCoder base model](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) and the [openassistant-guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open source tools and datasets have been instrumental in making this project a reality.
Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized the access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
|
Soressaa/name_entity | Soressaa | 2023-07-15T12:54:58Z | 0 | 0 | transformers | [
"transformers",
"token-classification",
"om",
"license:artistic-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-15T12:30:40Z | ---
license: artistic-2.0
language:
- om
pipeline_tag: token-classification
library_name: transformers
--- |
TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3-GGML | TheBloke | 2023-07-15T12:48:51Z | 0 | 4 | transformers | [
"transformers",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"region:us"
] | null | 2023-07-15T12:41:42Z | ---
datasets:
- OpenAssistant/oasst1
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_type: falcon
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# H2O's GM OASST1 Falcon 7B v3 GGML
These files are GGML format model files for [H2O's GM OASST1 Falcon 7B v3](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3).
These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.
GGCC is a new format created in a fork of llama.cpp that introduced this Falcon GGML-based support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon.
These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3)
## Prompt template: H2O
```
<|prompt|>{prompt}<|endoftext|><|answer|>
```
<!-- compatibility_ggml start -->
## Compatibility
To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps:
```
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release
```
Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.'
Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q4_0.bin -enc -p "write a story about llamas"
```
Parameter `-enc` should automatically use the right prompt template for the model, so you can just enter your desired prompt.
You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.
Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have.
`-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter.
Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q4_0.bin | q4_0 | 4 | 4.06 GB| 6.56 GB | Original quant method, 4-bit. |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q4_1.bin | q4_1 | 4 | 4.51 GB| 7.01 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q5_0.bin | q5_0 | 5 | 4.96 GB| 7.46 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q5_1.bin | q5_1 | 5 | 5.41 GB| 7.91 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| h2ogpt-gm-oasst1-en-2048-falcon-7b-v3.ggccv1.q8_0.bin | q8_0 | 8 | 7.67 GB| 10.17 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: H2O's GM OASST1 Falcon 7B v3
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28) personalized
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch` and `einops` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
pip install einops==0.6.1
```
```python
import torch
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
tokenizer=tokenizer,
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
use_fast=False,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3",
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
RWForCausalLM(
(transformer): RWModel(
(word_embeddings): Embedding(65024, 4544)
(h): ModuleList(
(0-31): 32 x DecoderLayer(
(input_layernorm): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
(self_attention): Attention(
(maybe_rotary): RotaryEmbedding()
(query_key_value): Linear(in_features=4544, out_features=4672, bias=False)
(dense): Linear(in_features=4544, out_features=4544, bias=False)
(attention_dropout): Dropout(p=0.0, inplace=False)
)
(mlp): MLP(
(dense_h_to_4h): Linear(in_features=4544, out_features=18176, bias=False)
(act): GELU(approximate='none')
(dense_4h_to_h): Linear(in_features=18176, out_features=4544, bias=False)
)
)
)
(ln_f): LayerNorm((4544,), eps=1e-05, elementwise_affine=True)
)
(lm_head): Linear(in_features=4544, out_features=65024, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
jovi848/autotrain-my_pref_on_products-74794139724 | jovi848 | 2023-07-15T12:28:35Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:jovi848/autotrain-data-my_pref_on_products",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-15T12:14:23Z | ---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- jovi848/autotrain-data-my_pref_on_products
co2_eq_emissions:
emissions: 7.027310340876168
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 74794139724
- CO2 Emissions (in grams): 7.0273
## Validation Metrics
- Loss: 0.000
- SacreBLEU: 0.000
- Gen len: 2.667 |
casque/tattoo_lora01 | casque | 2023-07-15T12:24:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T12:22:10Z | ---
license: creativeml-openrail-m
---
|
EdenSw/Vid2Sum | EdenSw | 2023-07-15T12:23:50Z | 0 | 3 | null | [
"summarization",
"arxiv:1910.09700",
"region:us"
] | summarization | 2023-07-15T09:12:55Z | ---
pipeline_tag: summarization
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
## Model Description
This model is designed to convert videos into textual summaries. It utilizes a combination of models from different libraries to perform the video-to-text conversion.
### Libraries and Models Used
library_name: transformers
- Library 1: OpenAI
- Model Name: whisper-large-v2
- Model URL: [OpenAI Whisper Large v2](https://api-inference.huggingface.co/models/openai/whisper-large-v2)
- Library 2: Facebook
- Model Name: bart-large-cnn
- Model URL: [Facebook BART Large CNN](https://api-inference.huggingface.co/models/facebook/bart-large-cnn)
Please note that this model is built using a combination of state-of-the-art models from different libraries, and it offers enhanced performance for video summarization tasks.
### Usage
To use the API endpoint for this model, you can make a POST request to the following URL:
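The exact endpoint for the combined model is not given here, but a rough sketch of chaining the two Inference API endpoints listed above (transcribe the audio track with Whisper, then summarize the transcript with BART) might look like this; the API token and audio file path are placeholders:
```python
import requests

HF_TOKEN = "hf_..."  # placeholder: your Hugging Face API token
headers = {"Authorization": f"Bearer {HF_TOKEN}"}

# 1) Transcribe the video's audio track with the Whisper endpoint listed above
with open("video_audio.wav", "rb") as f:  # placeholder path: audio extracted from the video
    asr = requests.post(
        "https://api-inference.huggingface.co/models/openai/whisper-large-v2",
        headers=headers,
        data=f.read(),
    ).json()
transcript = asr.get("text", "")

# 2) Summarize the transcript with the BART endpoint listed above
summary = requests.post(
    "https://api-inference.huggingface.co/models/facebook/bart-large-cnn",
    headers=headers,
    json={"inputs": transcript},
).json()
print(summary)
```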
## Model Details
- Name: Vid2Sum
- Pipeline Type: video-transcription
- Architecture: Transformer
- Description: This model generates summary text based on a video input.
- License: Apache-2.0
- Language: English
- Tags: text-generation, transformer, creative-writing
- **Developed by:** Eden
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Video-to-Text Conversion
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/EdenSw/Vid2Sum/tree/main
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Officialletai/ppo-LunarLander-v2 | Officialletai | 2023-07-15T12:18:48Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-01T17:41:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.87 +/- 17.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
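As a possible completion of the stub above (the checkpoint filename is an assumption; check the repository's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption; verify it against the files in this repository.
checkpoint = load_from_hub(repo_id="Officialletai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```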
|
Mandy234/myQAmodel | Mandy234 | 2023-07-15T12:03:59Z | 131 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-01T17:10:20Z | ---
license: apache-2.0
---
Hey!
You can use this model for question answering: given a context and a question about that context, the model can answer it.
Note that this model can only answer closed-ended questions. If you provide context about the history of the USA from a Wikipedia page and then ask
'Why is the sky blue?', chances are you won't get the answer you expected.
The exceptional thing is that I fine-tuned this model with only 50 data points (out of 10,000) due to RAM constraints. Even with such a small dataset, it
works exceptionally well, with 98% accuracy!
# Context
You can input up to 30k words (roughly the content of a 100-page book). Note that longer inputs take more time to pre-process, which is expected.
# Question
Ask a question that can be answered from the given context. This field can take up to 1,000 words as input.
# Answer
This section provides the answer to the question above. Note: you may not always get the right answer!
# Score
This number lies between 0 and 1, with 1 indicating the best possible answer and 0 the least confident one.
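The simplest way to try the model from code is the standard `question-answering` pipeline, which returns both the answer and the score described above; a minimal sketch (the context and question below are just placeholders):
```python
from transformers import pipeline

# Sketch: run the fine-tuned model through the question-answering pipeline
qa = pipeline("question-answering", model="Mandy234/myQAmodel")
result = qa(
    question="What kind of questions can the model answer?",  # placeholder question
    context="This model answers closed-ended questions drawn from the provided context.",  # placeholder context
)
print(result["answer"], result["score"])  # score lies between 0 and 1
```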
I used a BERT model for this task. I explored different methods and finally arrived at this model, and it's a pretty good one!
PS: I'm working on solving Q&A for physics problems. If you are a big physics buff, please reach out to me to make this possible! (mail:[email protected]) |
mrizalf7/xlm-r-qa-squad1.1-squad2.0-tf-1 | mrizalf7 | 2023-07-15T11:56:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-15T11:52:32Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-qa-squad1.1-squad2.0-tf-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-qa-squad1.1-squad2.0-tf-1
This model is a fine-tuned version of [mrizalf7/xlm-r-qa-squad-2.0](https://huggingface.co/mrizalf7/xlm-r-qa-squad-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 3.1936 |
| No log | 2.0 | 14 | 3.2455 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Dinesh-2004/my-pet-dog | Dinesh-2004 | 2023-07-15T11:48:14Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T11:44:32Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Dinesh-2004 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: OPJU107
Sample pictures of this concept:
|
casque/badbrounderwear | casque | 2023-07-15T11:29:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T11:28:50Z | ---
license: creativeml-openrail-m
---
|
Xmm/autotrain-led-large-16384-cnn_dailymail-12600-74781139721 | Xmm | 2023-07-15T11:24:28Z | 99 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:Xmm/autotrain-data-led-large-16384-cnn_dailymail-12600",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-07-15T11:07:43Z | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- Xmm/autotrain-data-led-large-16384-cnn_dailymail-12600
co2_eq_emissions:
emissions: 9.040750193743245
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 74781139721
- CO2 Emissions (in grams): 9.0408
## Validation Metrics
- Loss: 0.849
- Rouge1: 58.689
- Rouge2: 36.397
- RougeL: 41.690
- RougeLsum: 55.965
- Gen Len: 118.061
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Xmm/autotrain-led-large-16384-cnn_dailymail-12600-74781139721
``` |
casque/baebronegligee | casque | 2023-07-15T11:22:57Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T11:22:02Z | ---
license: creativeml-openrail-m
---
|
CarlosMN/CartPole | CarlosMN | 2023-07-15T11:10:02Z | 0 | 1 | null | [
"reinforcement-learning",
"en",
"arxiv:2112.04213",
"region:us"
] | reinforcement-learning | 2023-07-15T10:23:37Z | ---
language:
- en
pipeline_tag: reinforcement-learning
---
# Cartpole Reinforcement Learning
This repository is a project focused on exploring reinforcement learning techniques using the OpenAI Gym environment. The objective is to compare different algorithms and approaches to improve the performance of an agent in the Cartpole task.
## Installation
Install the required packages:
```
pip install -r requirements.txt
```
If you want to run the training phase and produce your own model, execute the main program; the hyperparameters and other options can be changed via the config.ini file.
If you just want to watch the trained model play the game, execute the following:
```
python3 watchModel.py
```
## Objectives
The main objectives of this project are as follows:
1. Develop a working model that demonstrates an increase in survival time through training.
2. Experiment with different reinforcement learning algorithms and compare their training time, complexity, and achieved scores.
3. Fine-tune the algorithm parameters and the number of bins used to achieve optimal training results.
4. Improve the consistency of the trained agent's strategy.
5. Implement experience replay to enhance learning.
## Results
The initial approach used in this project was Q-Learning, and it produced the following results:

The convergence plot shows an increase in the score over time, with three distinct phases. The first phase corresponds to random inputs, followed by a phase where the model explores a lot. The third phase occurs when the epsilon value starts to decay.
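For context, the tabular Q-Learning update behind these results (with the continuous CartPole observation discretized into bins and an epsilon-greedy policy) typically looks like the sketch below; the function and parameter names are illustrative, and the real hyperparameters live in config.ini:
```python
import numpy as np

# Illustrative sketch only; the actual hyperparameters are configured in config.ini.
def epsilon_greedy(q_table, state, epsilon, n_actions=2):
    # Explore with probability epsilon, otherwise exploit the current Q-value estimate.
    if np.random.random() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_table[state]))

def q_learning_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # state/next_state are tuples of bin indices; the last Q-table axis indexes actions.
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state + (action,)] += alpha * (td_target - q_table[state + (action,)])
```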

Comparing the results of the trained agent (after 20,000 episodes) with a random agent clearly demonstrates the improvement achieved:

Despite the improvements, the trained agent still lacks consistency. This inconsistency is believed to be due to the inherent randomness in the Cartpole environment.
## Experience Replay
Experience replay has been implemented in this project, leading to significant improvements in the agent's performance; the results are summarized below.
The results of the trained agent with experience replay are as follows:
It should be mentioned that, to speed up the training phase, the experience-replay agent had a score limit of 2000.
| Metric | Old Agent | Trained Agent with Experience Replay |
|------------------------|--------------|--------------------------------------|
| Convergence Plot |  |  |
| Score Histogram |  |  |
| Boxplot | | |
As observed, by adding experience replay the agent has been able to measurably increase its score.
## References
- https://arxiv.org/pdf/2112.04213.pdf
- https://aleksandarhaber.com/q-learning-in-python-with-tests-in-cart-pole-openai-gym-environment-reinforcement-learning-tutorial/ |
Daniil-plotnikov/russian-vision-v5-beta-3-1 | Daniil-plotnikov | 2023-07-15T10:50:20Z | 40 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T10:44:57Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Russian-Vision-v5-beta-3-1 Dreambooth model trained by Daniil-plotnikov with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
.jfif)

|
Anjyee/asep | Anjyee | 2023-07-15T10:09:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T10:04:28Z | ---
license: creativeml-openrail-m
---
|
TootToot/q-FrozenLake-v1-4x4-noSlippery | TootToot | 2023-07-15T09:53:40Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-18T14:06:52Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="TootToot/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Kooooofe/LandingLunar | Kooooofe | 2023-07-15T09:50:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T09:50:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -43.18 +/- 195.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real | hafidikhsan | 2023-07-15T09:42:41Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-15T09:39:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dt-real
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2577
- Accuracy: 0.6578
- F1: 0.6488
- Precision: 0.6432
- Recall: 0.6578
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.9224 | 1.0 | 310 | 0.8380 | 0.6142 | 0.5589 | 0.6070 | 0.6142 |
| 0.6168 | 2.0 | 620 | 0.7955 | 0.6651 | 0.6313 | 0.6369 | 0.6651 |
| 0.4687 | 3.0 | 930 | 1.0592 | 0.6150 | 0.6041 | 0.6434 | 0.6150 |
| 0.4495 | 4.0 | 1240 | 1.1980 | 0.6707 | 0.6592 | 0.6547 | 0.6707 |
| 0.182 | 5.0 | 1550 | 1.4150 | 0.6683 | 0.6596 | 0.6566 | 0.6683 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
naimul011/fine_tuned_llama-7b-hf_20 | naimul011 | 2023-07-15T09:37:01Z | 6 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-15T09:35:44Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Daniil-plotnikov/russian-vision-v5-beta-3 | Daniil-plotnikov | 2023-07-15T09:29:26Z | 60 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"ru",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T09:24:04Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
language:
- ru
- en
---
### Russian-Vision-V5-beta-3 Dreambooth model trained by Daniil-plotnikov with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept: |
marianna13/byt5-small-NSFW-image-urls | marianna13 | 2023-07-15T09:02:32Z | 128 | 3 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:laion/laion2B-en-joined",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-15T08:49:40Z | ---
datasets:
- laion/laion2B-en-joined
---
## Inference
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("marianna13/byt5-small-NSFW-image-urls")
def get_label(text):
input_ids = tokenizer(text, return_tensors="pt", padding=True).input_ids
outputs = model.generate(input_ids)
label = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return label
``` |
NasimB/guten-log-rarity-all-no-cut | NasimB | 2023-07-15T08:55:19Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T07:03:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-log-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-log-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7036 | 0.29 | 500 | 5.6327 |
| 5.3408 | 0.58 | 1000 | 5.2075 |
| 4.9933 | 0.87 | 1500 | 4.9530 |
| 4.7107 | 1.16 | 2000 | 4.7988 |
| 4.5567 | 1.46 | 2500 | 4.6874 |
| 4.452 | 1.75 | 3000 | 4.5707 |
| 4.3309 | 2.04 | 3500 | 4.4934 |
| 4.1223 | 2.33 | 4000 | 4.4512 |
| 4.0982 | 2.62 | 4500 | 4.3907 |
| 4.0684 | 2.91 | 5000 | 4.3428 |
| 3.8697 | 3.2 | 5500 | 4.3302 |
| 3.8014 | 3.49 | 6000 | 4.3025 |
| 3.7776 | 3.79 | 6500 | 4.2679 |
| 3.6962 | 4.08 | 7000 | 4.2638 |
| 3.5138 | 4.37 | 7500 | 4.2596 |
| 3.5066 | 4.66 | 8000 | 4.2463 |
| 3.4966 | 4.95 | 8500 | 4.2334 |
| 3.3506 | 5.24 | 9000 | 4.2465 |
| 3.3204 | 5.53 | 9500 | 4.2435 |
| 3.3138 | 5.82 | 10000 | 4.2428 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
BaleChen/test-ppo-Huggy | BaleChen | 2023-07-15T08:32:36Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-15T08:32:27Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: BaleChen/test-ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Milanesa16/Anri | Milanesa16 | 2023-07-15T08:19:22Z | 0 | 0 | null | [
"rvc",
"rvcv2",
"anri",
"citypop",
"ja",
"license:openrail",
"region:us"
] | null | 2023-07-15T08:07:34Z | ---
license: openrail
language:
- ja
tags:
- rvc
- rvcv2
- anri
- citypop
--- |
wesleyacheng/dog-breeds-multiclass-image-classification-with-vit | wesleyacheng | 2023-07-15T08:15:45Z | 993 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"arxiv:2010.11929",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-09T18:38:41Z | ---
license: mit
metrics:
- accuracy
- f1
pipeline_tag: image-classification
widget:
- src: https://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/Welchcorgipembroke.JPG/1200px-Welchcorgipembroke.JPG
example_title: Pembroke Corgi
- src: https://upload.wikimedia.org/wikipedia/commons/d/df/Shihtzu_%28cropped%29.jpg
example_title: Shih Tzu
- src: https://upload.wikimedia.org/wikipedia/commons/5/55/Beagle_600.jpg
example_title: Beagle
---
Model made with the notebook first posted on my [Kaggle](https://www.kaggle.com/wesleyacheng/dog-breeds-multiclass-image-classification-w-vit).
# Model Motivation
Recently, someone asked me if you can classify dog images into their respective dog breeds instead of just differentiating cats vs dogs as in my last [notebook](https://www.kaggle.com/code/wesleyacheng/cat-vs-dog-image-classification-with-cnns). I say **YES**!
Due to the complexity of the problem, we will be using the most advanced computer vision architecture released in the [2020 Google paper](https://arxiv.org/pdf/2010.11929v2.pdf), the [**Vision Transformer**](https://paperswithcode.com/methods/category/vision-transformer).
The difference between the **Vision Transformer** and the traditional **Convolutional Neural Network (CNN)** is how it treats an image. In **Vision Transformers**, we split the original image into patches, say 16 x 16, and feed them into the Transformer as a sequence with positional embeddings and self-attention, while in the **Convolutional Neural Network (CNN)**, we use the same image patches as input but rely on convolutions and pooling layers as inductive biases. What this means is that the **Vision Transformer** can use its own judgement to attend to any particular patch of the image in a *global* fashion through its self-attention mechanism, without us having to guide the neural network the way we do with a **CNN**, where we *locally* center/crop/bound our images to help its convolutions.
This makes the **Vision Transformer** architecture more flexible and scalable in nature, allowing us to create [foundation models](https://blogs.nvidia.com/blog/2023/03/13/what-are-foundation-models) in computer vision, similar to NLP foundation models like [BERT](https://paperswithcode.com/method/bert) and [GPT](https://paperswithcode.com/method/gpt), by pre-training self-supervised/supervised on massive amounts of image data so that the model generalizes to different computer vision tasks such as *image classification, recognition, segmentation, etc.* This cross-pollination helps us move closer towards the goal of Artificial General Intelligence.
One thing about **Vision Transformers** is that they have weaker inductive biases compared to **Convolutional Neural Networks**, which enables their scalability and flexibility. This feature/bug, depending on who you ask, means that most well-performing pre-trained models require more data despite having fewer parameters than their CNN counterparts.
Luckily, in this model, we will use a **Vision Transformer** from [Google hosted at HuggingFace](https://huggingface.co/google/vit-base-patch16-224-in21k) pre-trained on the [ImageNet-21k dataset](https://paperswithcode.com/paper/imagenet-21k-pretraining-for-the-masses) (14 million images, 21k classes) with 16x16 patches and 224x224 resolution to bypass that data limitation. We will be fine-tuning this model on our "small" dog breeds dataset of around 20 thousand images from the [Stanford Dogs dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) imported by Jessica Li into [Kaggle](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset) to classify dog images into 120 types of dog breeds!
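To make the patch-sequence idea described above concrete, here is a rough PyTorch sketch of the patch-embedding step (dimensions follow the 16x16-patch, 224x224-resolution setup; this is an illustration, not the exact implementation inside the `vit-base` checkpoint):
```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Turn an image into the token sequence a Vision Transformer consumes."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        # A strided convolution is equivalent to slicing 16x16 patches and projecting each one.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2           # 14 * 14 = 196 patches
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

    def forward(self, x):                                      # x: (B, 3, 224, 224)
        x = self.proj(x).flatten(2).transpose(1, 2)            # (B, 196, dim) patch tokens
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos_embed     # sequence fed to the self-attention encoder
```
The resulting token sequence is what the self-attention layers operate on globally, which is the key difference from a CNN's local receptive fields.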
# Model Description
This model is finetuned using the [Google Vision Transformer (vit-base-patch16-224-in21k)](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [Stanford Dogs dataset in Kaggle](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset) to classify dog images into 120 types of dog breeds.
# Intended Uses & Limitations
You can use this finetuned model to classify images of dogs only and dog breeds that are in the dataset.
# How to Use
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
import PIL
import requests
url = "https://upload.wikimedia.org/wikipedia/commons/5/55/Beagle_600.jpg"
image = PIL.Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("wesleyacheng/dog-breeds-multiclass-image-classification-with-vit")
model = AutoModelForImageClassification.from_pretrained("wesleyacheng/dog-breeds-multiclass-image-classification-with-vit")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 120 Stanford dog breeds classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
# Model Training Metrics
| Epoch | Top-1 Accuracy | Top-3 Accuracy | Top-5 Accuracy | Macro F1 |
|-------|----------------|-----------------|----------------|----------|
| 1 | 79.8% | 95.1% | 97.5% | 77.2% |
| 2 | 83.8% | 96.7% | 98.2% | 81.9% |
| 3 | 84.8% | 96.7% | 98.3% | 83.4% |
# Model Evaluation Metrics
| Top-1 Accuracy | Top-3 Accuracy | Top-5 Accuracy | Macro F1 |
|----------------|-----------------|----------------|----------|
| 84.0% | 97.1% | 98.7% | 83.0% | |
datajanko/ppo-Huggy | datajanko | 2023-07-15T08:15:39Z | 13 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-15T08:15:24Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: datajanko/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YojitShinde/Reinforce-PixelCopter-v0 | YojitShinde | 2023-07-15T08:05:02Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T08:04:57Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 34.70 +/- 38.34
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Raelina/Maya_Ikusaba | Raelina | 2023-07-15T07:35:53Z | 2 | 1 | diffusers | [
"diffusers",
"en",
"region:us"
] | null | 2023-07-15T07:12:04Z | ---
language:
- en
metrics:
- character
library_name: diffusers
---
This LoRA was trained with 40+ images taken from the anime.
The model used for training is AnimeFullFinalPruned (aka NAI), so it works with any anime-style model.
Recommended weight: 0.7-0.8
For the positive and negative prompts, refer to CivitAI: https://civitai.com/models/109201/maya-ikusaba-or-my-one-hit-kill-sister
I also recommend using Adetailer to fix faces and eyes; some of my example images use Adetailer. |
lilBuffaloEric/autoaudit_20230714_attempt2 | lilBuffaloEric | 2023-07-15T07:17:32Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-15T07:02:08Z | This model is a finetuned version run by the finetune.py in github repository tolen/alpaca-lora with the following parameters, notice that the training dataset can be found in repository:https://github.com/ddzipp/AutoAudit_LLM_Dataset
# model/data params
base_model: str = "yahma/llama-7b-hf",
data_path: str = "", # dataset see repository https://github.com/ddzipp/AutoAudit_LLM_Dataset/tree/v0.0.1
output_dir: str = "./autoaudit_20230703_attempt2",
# training hyperparams
batch_size: int = 4,
micro_batch_size: int = 1,
num_epochs: int = 28,
learning_rate: float = 3e-4,
cutoff_len: int = 512,
val_set_size: int = 400,
# lora hyperparams
lora_r: int = 16,
lora_alpha: int = 16,
lora_dropout: float = 0.05,
lora_target_modules: List[str] = [
"q_proj",
"k_proj",
"v_proj",
"o_proj"
],
# llm hyperparams
train_on_inputs: bool = True, # if False, masks out inputs in loss
add_eos_token: bool = False,
group_by_length: bool = False, # faster, but produces an odd training loss curve |
DouglasPontes/11jul-filtered_tweets | DouglasPontes | 2023-07-15T07:16:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-11T23:28:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: 11jul-filtered_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 11jul-filtered_tweets
This model is a fine-tuned version of [./model_tweets_2019](https://huggingface.co/./model_tweets_2019) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5545
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.25 | 8000 | 2.9512 |
| 3.1757 | 0.5 | 16000 | 2.8425 |
| 3.1757 | 0.75 | 24000 | 2.7666 |
| 2.924 | 1.0 | 32000 | 2.7425 |
| 2.924 | 1.25 | 40000 | 2.7106 |
| 2.8592 | 1.5 | 48000 | 2.6844 |
| 2.8592 | 1.75 | 56000 | 2.7093 |
| 2.83 | 2.0 | 64000 | 2.6717 |
| 2.83 | 2.25 | 72000 | 2.6630 |
| 2.8028 | 2.5 | 80000 | 2.6702 |
| 2.8028 | 2.75 | 88000 | 2.6711 |
| 2.7946 | 3.0 | 96000 | 2.6588 |
| 2.7946 | 3.26 | 104000 | 2.6297 |
| 2.7786 | 3.51 | 112000 | 2.6587 |
| 2.7786 | 3.76 | 120000 | 2.6469 |
| 2.7714 | 4.01 | 128000 | 2.6386 |
| 2.7714 | 4.26 | 136000 | 2.6694 |
| 2.7638 | 4.51 | 144000 | 2.6297 |
| 2.7638 | 4.76 | 152000 | 2.6502 |
| 2.7644 | 5.01 | 160000 | 2.6578 |
| 2.7644 | 5.26 | 168000 | 2.6088 |
| 2.7531 | 5.51 | 176000 | 2.6317 |
| 2.7531 | 5.76 | 184000 | 2.6299 |
| 2.759 | 6.01 | 192000 | 2.6422 |
| 2.759 | 6.26 | 200000 | 2.6136 |
| 2.7496 | 6.51 | 208000 | 2.6363 |
| 2.7496 | 6.76 | 216000 | 2.6274 |
| 2.7529 | 7.01 | 224000 | 2.6160 |
| 2.7529 | 7.26 | 232000 | 2.6298 |
| 2.7456 | 7.51 | 240000 | 2.6208 |
| 2.7456 | 7.76 | 248000 | 2.6095 |
| 2.7404 | 8.01 | 256000 | 2.6069 |
| 2.7404 | 8.26 | 264000 | 2.5831 |
| 2.7355 | 8.51 | 272000 | 2.6136 |
| 2.7355 | 8.76 | 280000 | 2.6075 |
| 2.7345 | 9.01 | 288000 | 2.5976 |
| 2.7345 | 9.26 | 296000 | 2.6100 |
| 2.7324 | 9.52 | 304000 | 2.6142 |
| 2.7324 | 9.77 | 312000 | 2.6072 |
| 2.7308 | 10.02 | 320000 | 2.6105 |
| 2.7308 | 10.27 | 328000 | 2.6167 |
| 2.7272 | 10.52 | 336000 | 2.5891 |
| 2.7272 | 10.77 | 344000 | 2.6084 |
| 2.736 | 11.02 | 352000 | 2.6131 |
| 2.736 | 11.27 | 360000 | 2.5864 |
| 2.7318 | 11.52 | 368000 | 2.6184 |
| 2.7318 | 11.77 | 376000 | 2.5971 |
| 2.7345 | 12.02 | 384000 | 2.5649 |
| 2.7345 | 12.27 | 392000 | 2.5936 |
| 2.7249 | 12.52 | 400000 | 2.6101 |
| 2.7249 | 12.77 | 408000 | 2.5798 |
| 2.7351 | 13.02 | 416000 | 2.5903 |
| 2.7351 | 13.27 | 424000 | 2.6260 |
| 2.7255 | 13.52 | 432000 | 2.6194 |
| 2.7255 | 13.77 | 440000 | 2.6140 |
| 2.7249 | 14.02 | 448000 | 2.5795 |
| 2.7249 | 14.27 | 456000 | 2.5961 |
| 2.7217 | 14.52 | 464000 | 2.5972 |
| 2.7217 | 14.77 | 472000 | 2.5877 |
| 2.7246 | 15.02 | 480000 | 2.5760 |
| 2.7246 | 15.27 | 488000 | 2.5926 |
| 2.7223 | 15.52 | 496000 | 2.6017 |
| 2.7223 | 15.78 | 504000 | 2.5969 |
| 2.7261 | 16.03 | 512000 | 2.5919 |
| 2.7261 | 16.28 | 520000 | 2.6044 |
| 2.7237 | 16.53 | 528000 | 2.6042 |
| 2.7237 | 16.78 | 536000 | 2.5812 |
| 2.7234 | 17.03 | 544000 | 2.5887 |
| 2.7234 | 17.28 | 552000 | 2.6220 |
| 2.7249 | 17.53 | 560000 | 2.5997 |
| 2.7249 | 17.78 | 568000 | 2.5860 |
| 2.7237 | 18.03 | 576000 | 2.6072 |
| 2.7237 | 18.28 | 584000 | 2.5817 |
| 2.7246 | 18.53 | 592000 | 2.5931 |
| 2.7246 | 18.78 | 600000 | 2.5815 |
| 2.7233 | 19.03 | 608000 | 2.5723 |
| 2.7233 | 19.28 | 616000 | 2.5887 |
| 2.7142 | 19.53 | 624000 | 2.5862 |
| 2.7142 | 19.78 | 632000 | 2.6196 |
| 2.7169 | 20.03 | 640000 | 2.5906 |
| 2.7169 | 20.28 | 648000 | 2.5897 |
| 2.7189 | 20.53 | 656000 | 2.6032 |
| 2.7189 | 20.78 | 664000 | 2.6001 |
| 2.7193 | 21.03 | 672000 | 2.5951 |
| 2.7193 | 21.28 | 680000 | 2.5839 |
| 2.7176 | 21.53 | 688000 | 2.5792 |
| 2.7176 | 21.78 | 696000 | 2.5972 |
| 2.7206 | 22.04 | 704000 | 2.5993 |
| 2.7206 | 22.29 | 712000 | 2.5966 |
| 2.7118 | 22.54 | 720000 | 2.5655 |
| 2.7118 | 22.79 | 728000 | 2.5832 |
| 2.7185 | 23.04 | 736000 | 2.5695 |
| 2.7185 | 23.29 | 744000 | 2.5662 |
| 2.7162 | 23.54 | 752000 | 2.5866 |
| 2.7162 | 23.79 | 760000 | 2.5729 |
| 2.714 | 24.04 | 768000 | 2.5876 |
| 2.714 | 24.29 | 776000 | 2.5816 |
| 2.7071 | 24.54 | 784000 | 2.6026 |
| 2.7071 | 24.79 | 792000 | 2.5834 |
| 2.7154 | 25.04 | 800000 | 2.5872 |
| 2.7154 | 25.29 | 808000 | 2.5804 |
| 2.7121 | 25.54 | 816000 | 2.6039 |
| 2.7121 | 25.79 | 824000 | 2.5803 |
| 2.7061 | 26.04 | 832000 | 2.5850 |
| 2.7061 | 26.29 | 840000 | 2.5766 |
| 2.7065 | 26.54 | 848000 | 2.5914 |
| 2.7065 | 26.79 | 856000 | 2.5810 |
| 2.7149 | 27.04 | 864000 | 2.5884 |
| 2.7149 | 27.29 | 872000 | 2.5676 |
| 2.7076 | 27.54 | 880000 | 2.5884 |
| 2.7076 | 27.79 | 888000 | 2.5598 |
| 2.7092 | 28.04 | 896000 | 2.5741 |
| 2.7092 | 28.3 | 904000 | 2.5898 |
| 2.7033 | 28.55 | 912000 | 2.5956 |
| 2.7033 | 28.8 | 920000 | 2.5792 |
| 2.7075 | 29.05 | 928000 | 2.5805 |
| 2.7075 | 29.3 | 936000 | 2.5790 |
| 2.7011 | 29.55 | 944000 | 2.5754 |
| 2.7011 | 29.8 | 952000 | 2.5768 |
| 2.7053 | 30.05 | 960000 | 2.5722 |
| 2.7053 | 30.3 | 968000 | 2.5683 |
| 2.6988 | 30.55 | 976000 | 2.5965 |
| 2.6988 | 30.8 | 984000 | 2.6246 |
| 2.7089 | 31.05 | 992000 | 2.6169 |
| 2.7089 | 31.3 | 1000000 | 2.5910 |
| 2.716 | 31.55 | 1008000 | 2.5767 |
| 2.716 | 31.8 | 1016000 | 2.5883 |
| 2.7103 | 32.05 | 1024000 | 2.5601 |
| 2.7103 | 32.3 | 1032000 | 2.5739 |
| 2.706 | 32.55 | 1040000 | 2.5870 |
| 2.706 | 32.8 | 1048000 | 2.5976 |
| 2.7088 | 33.05 | 1056000 | 2.5769 |
| 2.7088 | 33.3 | 1064000 | 2.5688 |
| 2.703 | 33.55 | 1072000 | 2.5604 |
| 2.703 | 33.8 | 1080000 | 2.5719 |
| 2.712 | 34.05 | 1088000 | 2.5797 |
| 2.712 | 34.3 | 1096000 | 2.5561 |
| 2.7008 | 34.56 | 1104000 | 2.5654 |
| 2.7008 | 34.81 | 1112000 | 2.5802 |
| 2.7052 | 35.06 | 1120000 | 2.5729 |
| 2.7052 | 35.31 | 1128000 | 2.5810 |
| 2.7031 | 35.56 | 1136000 | 2.5681 |
| 2.7031 | 35.81 | 1144000 | 2.5781 |
| 2.702 | 36.06 | 1152000 | 2.5811 |
| 2.702 | 36.31 | 1160000 | 2.5827 |
| 2.6986 | 36.56 | 1168000 | 2.5716 |
| 2.6986 | 36.81 | 1176000 | 2.5553 |
| 2.6985 | 37.06 | 1184000 | 2.5746 |
| 2.6985 | 37.31 | 1192000 | 2.5655 |
| 2.7042 | 37.56 | 1200000 | 2.5836 |
| 2.7042 | 37.81 | 1208000 | 2.5898 |
| 2.7093 | 38.06 | 1216000 | 2.5779 |
| 2.7093 | 38.31 | 1224000 | 2.5912 |
| 2.7108 | 38.56 | 1232000 | 2.5720 |
| 2.7108 | 38.81 | 1240000 | 2.5728 |
| 2.7023 | 39.06 | 1248000 | 2.5882 |
| 2.7023 | 39.31 | 1256000 | 2.5858 |
| 2.7025 | 39.56 | 1264000 | 2.5806 |
| 2.7025 | 39.81 | 1272000 | 2.5723 |
| 2.6993 | 40.06 | 1280000 | 2.5607 |
| 2.6993 | 40.31 | 1288000 | 2.5715 |
| 2.7042 | 40.56 | 1296000 | 2.5881 |
| 2.7042 | 40.82 | 1304000 | 2.5734 |
| 2.7006 | 41.07 | 1312000 | 2.5711 |
| 2.7006 | 41.32 | 1320000 | 2.5686 |
| 2.691 | 41.57 | 1328000 | 2.5655 |
| 2.691 | 41.82 | 1336000 | 2.5519 |
| 2.7039 | 42.07 | 1344000 | 2.5942 |
| 2.7039 | 42.32 | 1352000 | 2.5920 |
| 2.6979 | 42.57 | 1360000 | 2.5711 |
| 2.6979 | 42.82 | 1368000 | 2.5686 |
| 2.7027 | 43.07 | 1376000 | 2.5708 |
| 2.7027 | 43.32 | 1384000 | 2.5619 |
| 2.6982 | 43.57 | 1392000 | 2.5836 |
| 2.6982 | 43.82 | 1400000 | 2.5781 |
| 2.6998 | 44.07 | 1408000 | 2.5691 |
| 2.6998 | 44.32 | 1416000 | 2.5735 |
| 2.6933 | 44.57 | 1424000 | 2.5383 |
| 2.6933 | 44.82 | 1432000 | 2.5936 |
| 2.7026 | 45.07 | 1440000 | 2.5486 |
| 2.7026 | 45.32 | 1448000 | 2.5586 |
| 2.6928 | 45.57 | 1456000 | 2.5715 |
| 2.6928 | 45.82 | 1464000 | 2.5450 |
| 2.699 | 46.07 | 1472000 | 2.5726 |
| 2.699 | 46.32 | 1480000 | 2.5677 |
| 2.7 | 46.57 | 1488000 | 2.5615 |
| 2.7 | 46.82 | 1496000 | 2.5721 |
| 2.7014 | 47.08 | 1504000 | 2.5562 |
| 2.7014 | 47.33 | 1512000 | 2.5676 |
| 2.6924 | 47.58 | 1520000 | 2.5670 |
| 2.6924 | 47.83 | 1528000 | 2.5643 |
| 2.6918 | 48.08 | 1536000 | 2.5836 |
| 2.6918 | 48.33 | 1544000 | 2.5420 |
| 2.7047 | 48.58 | 1552000 | 2.5471 |
| 2.7047 | 48.83 | 1560000 | 2.5655 |
| 2.6875 | 49.08 | 1568000 | 2.5670 |
| 2.6875 | 49.33 | 1576000 | 2.5551 |
| 2.6966 | 49.58 | 1584000 | 2.5814 |
| 2.6966 | 49.83 | 1592000 | 2.5690 |
| 2.7004 | 50.08 | 1600000 | 2.5666 |
| 2.7004 | 50.33 | 1608000 | 2.5571 |
| 2.6978 | 50.58 | 1616000 | 2.5620 |
| 2.6978 | 50.83 | 1624000 | 2.5749 |
| 2.6921 | 51.08 | 1632000 | 2.5810 |
| 2.6921 | 51.33 | 1640000 | 2.5710 |
| 2.6958 | 51.58 | 1648000 | 2.5627 |
| 2.6958 | 51.83 | 1656000 | 2.5729 |
| 2.6909 | 52.08 | 1664000 | 2.5785 |
| 2.6909 | 52.33 | 1672000 | 2.5725 |
| 2.6906 | 52.58 | 1680000 | 2.5745 |
| 2.6906 | 52.83 | 1688000 | 2.5708 |
| 2.6886 | 53.08 | 1696000 | 2.5379 |
| 2.6886 | 53.34 | 1704000 | 2.5692 |
| 2.6931 | 53.59 | 1712000 | 2.5716 |
| 2.6931 | 53.84 | 1720000 | 2.5474 |
| 2.6976 | 54.09 | 1728000 | 2.5843 |
| 2.6976 | 54.34 | 1736000 | 2.5615 |
| 2.7044 | 54.59 | 1744000 | 2.5678 |
| 2.7044 | 54.84 | 1752000 | 2.5513 |
| 2.7076 | 55.09 | 1760000 | 2.5642 |
| 2.7076 | 55.34 | 1768000 | 2.5533 |
| 2.6957 | 55.59 | 1776000 | 2.5803 |
| 2.6957 | 55.84 | 1784000 | 2.5523 |
| 2.6911 | 56.09 | 1792000 | 2.5769 |
| 2.6911 | 56.34 | 1800000 | 2.5510 |
| 2.6949 | 56.59 | 1808000 | 2.5712 |
| 2.6949 | 56.84 | 1816000 | 2.5696 |
| 2.6935 | 57.09 | 1824000 | 2.5776 |
| 2.6935 | 57.34 | 1832000 | 2.5495 |
| 2.6924 | 57.59 | 1840000 | 2.5555 |
| 2.6924 | 57.84 | 1848000 | 2.5893 |
| 2.6992 | 58.09 | 1856000 | 2.5474 |
| 2.6992 | 58.34 | 1864000 | 2.5591 |
| 2.6938 | 58.59 | 1872000 | 2.5641 |
| 2.6938 | 58.84 | 1880000 | 2.5716 |
| 2.6918 | 59.09 | 1888000 | 2.5481 |
| 2.6918 | 59.34 | 1896000 | 2.5384 |
| 2.6927 | 59.59 | 1904000 | 2.5577 |
| 2.6927 | 59.85 | 1912000 | 2.5693 |
| 2.6874 | 60.1 | 1920000 | 2.5704 |
| 2.6874 | 60.35 | 1928000 | 2.5680 |
| 2.6915 | 60.6 | 1936000 | 2.5578 |
| 2.6915 | 60.85 | 1944000 | 2.5540 |
| 2.6942 | 61.1 | 1952000 | 2.5527 |
| 2.6942 | 61.35 | 1960000 | 2.5956 |
| 2.7001 | 61.6 | 1968000 | 2.5667 |
| 2.7001 | 61.85 | 1976000 | 2.5741 |
| 2.692 | 62.1 | 1984000 | 2.5789 |
| 2.692 | 62.35 | 1992000 | 2.5476 |
| 2.6954 | 62.6 | 2000000 | 2.5700 |
| 2.6954 | 62.85 | 2008000 | 2.5605 |
| 2.6891 | 63.1 | 2016000 | 2.5873 |
| 2.6891 | 63.35 | 2024000 | 2.5622 |
| 2.6903 | 63.6 | 2032000 | 2.5888 |
| 2.6903 | 63.85 | 2040000 | 2.5678 |
| 2.6911 | 64.1 | 2048000 | 2.5558 |
| 2.6911 | 64.35 | 2056000 | 2.5632 |
| 2.6921 | 64.6 | 2064000 | 2.5564 |
| 2.6921 | 64.85 | 2072000 | 2.5399 |
| 2.6842 | 65.1 | 2080000 | 2.5599 |
| 2.6842 | 65.35 | 2088000 | 2.5642 |
| 2.6961 | 65.6 | 2096000 | 2.5573 |
| 2.6961 | 65.85 | 2104000 | 2.5605 |
| 2.6897 | 66.11 | 2112000 | 2.5627 |
| 2.6897 | 66.36 | 2120000 | 2.5740 |
| 2.6929 | 66.61 | 2128000 | 2.5796 |
| 2.6929 | 66.86 | 2136000 | 2.5580 |
| 2.699 | 67.11 | 2144000 | 2.5589 |
| 2.699 | 67.36 | 2152000 | 2.5618 |
| 2.6923 | 67.61 | 2160000 | 2.5688 |
| 2.6923 | 67.86 | 2168000 | 2.5876 |
| 2.6971 | 68.11 | 2176000 | 2.5621 |
| 2.6971 | 68.36 | 2184000 | 2.5822 |
| 2.6879 | 68.61 | 2192000 | 2.5695 |
| 2.6879 | 68.86 | 2200000 | 2.5471 |
| 2.694 | 69.11 | 2208000 | 2.5567 |
| 2.694 | 69.36 | 2216000 | 2.5659 |
| 2.6924 | 69.61 | 2224000 | 2.5781 |
| 2.6924 | 69.86 | 2232000 | 2.5597 |
| 2.6904 | 70.11 | 2240000 | 2.5779 |
| 2.6904 | 70.36 | 2248000 | 2.5693 |
| 2.6975 | 70.61 | 2256000 | 2.5734 |
| 2.6975 | 70.86 | 2264000 | 2.5612 |
| 2.6945 | 71.11 | 2272000 | 2.5453 |
| 2.6945 | 71.36 | 2280000 | 2.5925 |
| 2.6922 | 71.61 | 2288000 | 2.5666 |
| 2.6922 | 71.86 | 2296000 | 2.5671 |
| 2.6906 | 72.11 | 2304000 | 2.5575 |
| 2.6906 | 72.37 | 2312000 | 2.5581 |
| 2.6993 | 72.62 | 2320000 | 2.5700 |
| 2.6993 | 72.87 | 2328000 | 2.5819 |
| 2.7053 | 73.12 | 2336000 | 2.5811 |
| 2.7053 | 73.37 | 2344000 | 2.5717 |
| 2.6952 | 73.62 | 2352000 | 2.5754 |
| 2.6952 | 73.87 | 2360000 | 2.5699 |
| 2.6936 | 74.12 | 2368000 | 2.5752 |
| 2.6936 | 74.37 | 2376000 | 2.5775 |
| 2.6962 | 74.62 | 2384000 | 2.5843 |
| 2.6962 | 74.87 | 2392000 | 2.5677 |
| 2.6894 | 75.12 | 2400000 | 2.5784 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
blackmount8/mpt-7b-instruct-ct2-int8_float16 | blackmount8 | 2023-07-15T06:52:02Z | 2 | 0 | transformers | [
"transformers",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-07-15T05:40:47Z | ---
inference: false
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
---
# blackmount8/mpt-7b-instruct-ct2-int8_float16
Int8_float16 version of [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct), quantized using CTranslate2.
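A minimal sketch of loading the converted model with the CTranslate2 Python API (the local model directory, prompt, and generation settings below are illustrative assumptions; see the CTranslate2 documentation for details):
```python
import ctranslate2
import transformers

# Path to the downloaded int8_float16 CTranslate2 model directory (assumption).
generator = ctranslate2.Generator("mpt-7b-instruct-ct2-int8_float16", device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = ("Below is an instruction that describes a task. "
          "Write a response that appropriately completes the request.\n"
          "### Instruction:\nWhat is a quoll?\n### Response:\n")
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

# Generate a continuation from the tokenized prompt.
results = generator.generate_batch([tokens], max_length=128, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```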
## MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted in the dolly-15k format:
```python
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
example = "James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? Explain before answering."
fmt_ex = PROMPT_FOR_GENERATION_FORMAT.format(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
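For example, reusing `torch`, `pipe`, and `fmt_ex` from the snippets above (generation arguments are illustrative):
```python
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(fmt_ex,
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```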
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-40GBs for about 2.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
NasimB/gpt2-concat-cbt-rarity-end-p5k | NasimB | 2023-07-15T06:25:45Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-15T04:30:51Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-rarity-end-p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-rarity-end-p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6981 | 0.29 | 500 | 5.6337 |
| 5.3423 | 0.58 | 1000 | 5.2046 |
| 4.9886 | 0.87 | 1500 | 4.9471 |
| 4.7073 | 1.17 | 2000 | 4.8060 |
| 4.5535 | 1.46 | 2500 | 4.6759 |
| 4.4474 | 1.75 | 3000 | 4.5672 |
| 4.336 | 2.04 | 3500 | 4.4881 |
| 4.1197 | 2.33 | 4000 | 4.4473 |
| 4.1025 | 2.62 | 4500 | 4.3897 |
| 4.0623 | 2.91 | 5000 | 4.3338 |
| 3.8634 | 3.21 | 5500 | 4.3240 |
| 3.7979 | 3.5 | 6000 | 4.2995 |
| 3.7821 | 3.79 | 6500 | 4.2652 |
| 3.6959 | 4.08 | 7000 | 4.2614 |
| 3.5107 | 4.37 | 7500 | 4.2535 |
| 3.5065 | 4.66 | 8000 | 4.2392 |
| 3.5013 | 4.95 | 8500 | 4.2262 |
| 3.3462 | 5.24 | 9000 | 4.2390 |
| 3.3225 | 5.54 | 9500 | 4.2385 |
| 3.3144 | 5.83 | 10000 | 4.2372 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
maidacundo/falcon_qlora_sql_r64 | maidacundo | 2023-07-15T06:20:58Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:spider",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-06-20T09:55:28Z | ---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
datasets:
- spider
model-index:
- name: falcon_qlora_sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_qlora_sql
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the spider dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 43.7
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2917 | 0.23 | 100 | 0.3571 |
| 0.8787 | 0.46 | 200 | 0.3685 |
| 0.3366 | 0.68 | 300 | 0.4001 |
| 0.2319 | 0.91 | 400 | 0.3134 |
| 0.3005 | 1.14 | 500 | 0.3567 |
| 0.4231 | 1.37 | 600 | 0.3052 |
| 0.2535 | 1.6 | 700 | 0.3218 |
| 0.2257 | 1.83 | 800 | 0.2744 |
| 0.0771 | 2.05 | 900 | 0.2567 |
| 0.0978 | 2.28 | 1000 | 0.2304 |
| 0.2031 | 2.51 | 1100 | 0.2236 |
| 0.2471 | 2.74 | 1200 | 0.2431 |
| 0.1514 | 2.97 | 1300 | 0.1897 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
coreml-community/coreml-8528-diffusion | coreml-community | 2023-07-15T06:08:15Z | 0 | 23 | null | [
"coreml",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-01-09T23:50:12Z | ---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
---
# Core ML Converted Model
This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
`split_einsum` version is compatible with all compute unit options including Neural Engine.<br>
`original` version is only compatible with CPU & GPU option.
# 8528-diffusion final
Source: [Hugging Face](https://huggingface.co/852wa/8528-diffusion) (The release of the source model has ended.)
8528-diffusion is a latent text-to-image diffusion model, conditioned by fine-tuning to colorful character images.
8528 Diffusion is a fine-tuning model of Stable Diffusion v1.4 with AI output images (t2i and t2i with i2i).
I recommend entering "low quality,worst quality," for Negative prompt and Clip skip: 2.
<!--
<img src=https://i.imgur.com/vCn02tM.jpg >
!-->

((ultra-detailed)), ((illustration)), Silver hair, red eyes, beautiful eyes, dress, Queen,Anime style, pretty face, pretty eyes, pretty, girl,High resolution, beautiful girl,octane render, realistic, hyper detailed ray tracing, 8k,classic style,Rococo
Negative prompt: (low quality, worst quality:1.4) concept art
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 241379229, Size: 512x768, Model hash: 31cd036c, Clip skip: 2
# 8528-diffusion v0.4
<!--
<img src=https://i.imgur.com/X2zFoeA.jpg >
!-->

# 8528-diffusion v0.3
<!--
<img src=https://i.imgur.com/QQuNpYl.png >
<img src=https://i.imgur.com/u785LlC.png >
!-->


# 8528-diffusion v0.2
8528-diffusion is a latent text-to-image diffusion model, conditioned by fine-tuning to colorful character images.
8528 Diffusion v0.2 & v0.1 is a fine-tuning model of Waifu Diffusion with AI output images (t2i and t2i with i2i).
<!--
<img src=https://i.imgur.com/z4sFctp.png >
!-->

# 8528-diffusion v0.1
<!--
<img src=https://i.imgur.com/8chXeif.png >
!-->

[google colab](https://colab.research.google.com/drive/1ksRxO84CMbXrW_p-x5Vuz74AHnrWpe_u)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
<!-- Discord Server has been stopped.
## Discord
https://discord.gg/ax9KgpUMUP
!--> |
BaleChen/test_lunarlanderv2_mlp_ppo | BaleChen | 2023-07-15T06:06:40Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T06:06:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.71 +/- 12.68
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
akselozer9/akselo | akselozer9 | 2023-07-15T05:48:16Z | 0 | 0 | null | [
"token-classification",
"dataset:Open-Orca/OpenOrca",
"region:us"
] | token-classification | 2023-07-15T05:47:48Z | ---
datasets:
- Open-Orca/OpenOrca
metrics:
- accuracy
pipeline_tag: token-classification
--- |
chandan9t8/Reinforce_cartPole | chandan9t8 | 2023-07-15T05:36:53Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T04:26:29Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_cartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kelvinih/taser-cocondenser-wiki | kelvinih | 2023-07-15T05:33:04Z | 0 | 0 | null | [
"pytorch",
"license:mit",
"region:us"
] | null | 2023-07-15T05:26:30Z | ---
license: mit
---
# Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
This repository includes the model for
[Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering](https://aclanthology.org/2023.acl-short.159/).
If you find this useful, please cite the following paper:
```
@inproceedings{cheng-etal-2023-task,
title = "Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering",
author = "Cheng, Hao and
Fang, Hao and
Liu, Xiaodong and
Gao, Jianfeng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-short.159",
pages = "1864--1875",
}
```
|
kelvinih/taser-bert-base-uncased | kelvinih | 2023-07-15T05:29:51Z | 0 | 0 | null | [
"pytorch",
"license:mit",
"region:us"
] | null | 2023-07-15T05:27:05Z | ---
license: mit
---
# Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering
This repository includes the model for
[Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering](https://aclanthology.org/2023.acl-short.159/).
If you find this useful, please cite the following paper:
```
@inproceedings{cheng-etal-2023-task,
title = "Task-Aware Specialization for Efficient and Robust Dense Retrieval for Open-Domain Question Answering",
author = "Cheng, Hao and
Fang, Hao and
Liu, Xiaodong and
Gao, Jianfeng",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-short.159",
pages = "1864--1875",
}
```
|
goethe0101/GWP_Model | goethe0101 | 2023-07-15T04:46:28Z | 1 | 0 | peft | [
"peft",
"pytorch",
"gpt_neox",
"region:us"
] | null | 2023-07-08T01:59:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
Wiryan/imryan | Wiryan | 2023-07-15T04:27:51Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-15T04:22:48Z | ---
license: creativeml-openrail-m
---
|
RoundtTble/dinov2_vitl14_onnx | RoundtTble | 2023-07-15T04:16:19Z | 0 | 0 | null | [
"onnx",
"region:us"
] | null | 2023-07-02T02:18:01Z | # dinov2_vitl14_onnx
## Run Triton
```
make triton
```
```
=============================
== Triton Inference Server ==
=============================
NVIDIA Release 23.04 (build 58408265)
Triton Server Version 2.33.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
NOTE: CUDA Forward Compatibility mode ENABLED.
Using CUDA 12.1 driver version 530.30.02 with kernel driver version 525.125.06.
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0715 04:13:59.173070 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f1a70000000' with size 268435456
I0715 04:13:59.173293 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0715 04:13:59.175108 1 model_lifecycle.cc:459] loading: dinov2_vitl14:1
I0715 04:13:59.177471 1 onnxruntime.cc:2504] TRITONBACKEND_Initialize: onnxruntime
I0715 04:13:59.177510 1 onnxruntime.cc:2514] Triton TRITONBACKEND API version: 1.12
I0715 04:13:59.177518 1 onnxruntime.cc:2520] 'onnxruntime' TRITONBACKEND API version: 1.12
I0715 04:13:59.177525 1 onnxruntime.cc:2550] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0715 04:13:59.233419 1 onnxruntime.cc:2608] TRITONBACKEND_ModelInitialize: dinov2_vitl14 (version 1)
I0715 04:13:59.233847 1 onnxruntime.cc:666] skipping model configuration auto-complete for 'dinov2_vitl14': inputs and outputs already specified
I0715 04:13:59.234233 1 onnxruntime.cc:2651] TRITONBACKEND_ModelInstanceInitialize: dinov2_vitl14_0 (GPU device 0)
2023-07-15 04:13:59.546824126 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-07-15 04:13:59.546847104 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0715 04:14:00.851748 1 model_lifecycle.cc:694] successfully loaded 'dinov2_vitl14' version 1
I0715 04:14:00.851859 1 server.cc:583]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0715 04:14:00.851944 1 server.cc:610]
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0715 04:14:00.852005 1 server.cc:653]
+---------------+---------+--------+
| Model | Version | Status |
+---------------+---------+--------+
| dinov2_vitl14 | 1 | READY |
+---------------+---------+--------+
I0715 04:14:00.872645 1 metrics.cc:808] Collecting metrics for GPU 0: NVIDIA RTX A4000
I0715 04:14:00.873026 1 metrics.cc:701] Collecting CPU metrics
I0715 04:14:00.873315 1 tritonserver.cc:2387]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.33.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0] | /models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
| cache_enabled | 0 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0715 04:14:00.875498 1 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:8001
I0715 04:14:00.875964 1 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0715 04:14:00.917871 1 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
```
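## Query the Server
Once the server logs the model as `READY`, requests can be sent over gRPC. Below is a minimal Python client sketch (`pip install tritonclient[grpc]`); the output tensor name, `FP32` dtype, and leading batch dimension are assumptions — check the model's `config.pbtxt` under `/models` for the actual names and shapes.
```python
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="0.0.0.0:8001")

# Random stand-in for a preprocessed image; the perf_analyzer run below uses
# --shape input:3,560,560, so each request carries a [1, 3, 560, 560] tensor
# (the leading batch dimension is an assumption).
batch = np.random.rand(1, 3, 560, 560).astype(np.float32)

infer_input = grpcclient.InferInput("input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# "output" is an assumed tensor name -- check /models/dinov2_vitl14/config.pbtxt for the real one.
response = client.infer(
    model_name="dinov2_vitl14",
    inputs=[infer_input],
    outputs=[grpcclient.InferRequestedOutput("output")],
)
features = response.as_numpy("output")
print(features.shape)
```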
## Perf Analyzer
```
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:23.04-py3-sdk perf_analyzer -m dinov2_vitl14 --percentile=95 -i grpc -u 0.0.0.0:8001 --concurrency-range 16:16 --shape input:3,560,560
=================================
== Triton Inference Server SDK ==
=================================
NVIDIA Release 23.04 (build 58408269)
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
NOTE: CUDA Forward Compatibility mode ENABLED.
Using CUDA 12.1 driver version 530.30.02 with kernel driver version 525.125.06.
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Latency limit: 0 msec
Concurrency limit: 16 concurrent requests
Using synchronous calls for inference
Stabilizing using p95 latency
Request concurrency: 16
Client:
Request count: 881
Throughput: 48.927 infer/sec
p50 latency: 324015 usec
p90 latency: 330275 usec
p95 latency: 331952 usec
p99 latency: 336638 usec
Avg gRPC time: 323066 usec ((un)marshal request/response 953 usec + response wait 322113 usec)
Server:
Inference count: 881
Execution count: 111
Successful request count: 881
Avg request latency: 313673 usec (overhead 7065 usec + queue 151785 usec + compute input 7582 usec + compute infer 143162 usec + compute output 4077 usec)
Inferences/Second vs. Client p95 Batch Latency
Concurrency: 16, throughput: 48.927 infer/sec, latency 331952 usec
```
|
mittalashish/chique7 | mittalashish | 2023-07-15T04:11:30Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-15T04:08:44Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: <Chique>
---
### chique7 Dreambooth model trained by mittalashish with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v2-1-512 base model
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
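A minimal `diffusers` sketch for local inference is shown below; the prompt text around the `<Chique>` token is illustrative, and the fp16/CUDA settings are optional assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tuned weights (drop torch_dtype and .to("cuda") to run on CPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "mittalashish/chique7", torch_dtype=torch.float16
).to("cuda")

# The concept token <Chique> must appear in the prompt.
image = pipe("a photo of <Chique>").images[0]
image.save("chique7-sample.png")
```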
Sample pictures of:
<Chique> (use that in your prompt)

|
jerryjalapeno/nart-100k-7b | jerryjalapeno | 2023-07-15T03:57:11Z | 1,520 | 20 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-14T19:01:46Z | ---
license: cc-by-nc-nd-4.0
---
|
renatostrianese/q-Taxi-v3 | renatostrianese | 2023-07-15T03:48:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T03:48:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper from the Deep RL course notebook; import gym (or gymnasium) before calling gym.make.
model = load_from_hub(repo_id="renatostrianese/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
renatostrianese/q-FrozenLake-v1-4x4-noSlippery | renatostrianese | 2023-07-15T03:43:44Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T03:43:33Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper from the Deep RL course notebook; import gym (or gymnasium) before calling gym.make.
model = load_from_hub(repo_id="renatostrianese/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
photonmz/distilbert-base-uncased-finetuned-emotion | photonmz | 2023-07-15T03:33:06Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-15T03:10:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275012469136824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
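A minimal usage sketch with the `transformers` pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="photonmz/distilbert-base-uncased-finetuned-emotion",
)
# Returns [{'label': ..., 'score': ...}]; label names/ids follow the emotion dataset classes.
print(classifier("I can't believe how well this turned out!"))
```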
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 |
| 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
crumb/opentinystories-68m-complex | crumb | 2023-07-15T03:25:24Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:crumb/flan-ul2-tinystories-complex",
"dataset:crumb/flan-ul2-tinystories",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T09:16:03Z | ---
datasets:
- crumb/flan-ul2-tinystories-complex
- crumb/flan-ul2-tinystories
---
Test loss: 2.669290 on crumb/flan-ul2-tinystories-complex. Initialized from crumb/opentinystories-30m-base and trained for 2 epochs with a linearly decreasing learning rate of 1e-4 and double the batch size (256). |