| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-05-28 18:26:29) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 477 classes) | tags (sequence, lengths 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-05-28 18:24:32) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ScottShao/falcon-200steps-4bit-finetunined-sxl-20230811 | ScottShao | 2023-08-11T06:15:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T06:14:14Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
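In code, this configuration corresponds roughly to the sketch below. It is a hypothetical reconstruction rather than part of the original card; in particular, the base model name is only a guess from the repository name.
```python
# Hypothetical reconstruction of the quantization setup listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumption: the card does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
# Load the PEFT adapter published in this repository on top of the quantized base.
model = PeftModel.from_pretrained(base, "ScottShao/falcon-200steps-4bit-finetunined-sxl-20230811")
```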
### Framework versions
- PEFT 0.5.0.dev0
|
byxy/bert_sc | byxy | 2023-08-11T06:05:21Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-10T08:38:28Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: byxy/bert_sc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# byxy/bert_sc
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0658
- Validation Loss: 2.1769
- Train Accuracy: 0.6522
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1248, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
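The serialized optimizer dictionary above is easier to read when rebuilt by hand; a minimal Keras sketch (not part of the generated card) would be:
```python
import tensorflow as tf

# Adam with a linear polynomial decay of the learning rate from 2e-05 to 0
# over 1248 steps, matching the serialized config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1248,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```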
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.0658 | 2.1769 | 0.6522 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.10.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
wangxso/dqn-SpaceInvadersNoFrameskip-v4 | wangxso | 2023-08-11T05:58:14Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T08:56:25Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 100.50 +/- 57.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wangxso -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wangxso -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga wangxso
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
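If you prefer to bypass the RL Zoo scripts, a minimal Stable-Baselines3 sketch for loading and running the checkpoint could look like the following; the checkpoint filename is an assumption based on the usual RL Zoo naming convention, so check the repository files.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# The filename is assumed from the standard RL Zoo convention.
checkpoint = load_from_hub(
    repo_id="wangxso/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time preprocessing: Atari wrappers plus 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```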
|
Evan-Lin/Bart-abs-yelp-allure-10 | Evan-Lin | 2023-08-11T05:57:13Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2023-08-08T07:00:58Z | ---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin//tmp/tmps_u5v_y3/Evan-Lin/Bart-abs-yelp-allure-10")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmps_u5v_y3/Evan-Lin/Bart-abs-yelp-allure-10")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmps_u5v_y3/Evan-Lin/Bart-abs-yelp-allure-10")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
mambazjp/REC-MV_preprocess | mambazjp | 2023-08-11T05:50:29Z | 0 | 4 | null | [
"region:us"
] | null | 2023-08-02T15:21:16Z | # This is the tutorial on data pre-processing for [REC-MV](https://github.com/GAP-LAB-CUHK-SZ/REC-MV).
The data pre-processing covers images, masks, normals, parsing (garment segmentation), camera, SMPL parameters (beta and theta), feature lines, and skinning weights.
## Step0
Set up the environment (or you can directly use the REC-MV environment):
```
pip install -r requirements.txt
```
## Step1
You should make a directory to save all the processed data; name it, say, xiaoming.
Then turn the video into images:
```
encodepngffmpeg()
{
# $1: frame rate
# $2: output video name
ffmpeg -r ${1} -pattern_type glob -i '*.png' -vcodec libx264 -crf 18 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -pix_fmt yuv420p ${2}
}
encodepngffmpeg 30 ./xiaoming.mp4
```
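Note that the helper above encodes PNG frames into a video. If you need the opposite direction, splitting the source video into frames under `xiaoming/imgs`, a minimal ffmpeg sketch (not part of the original scripts) is:
```
mkdir -p xiaoming/imgs
ffmpeg -i xiaoming.mp4 xiaoming/imgs/%06d.png
```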
Then, your data directory:
```
xiaoming/
└── imgs
```
## Step2 Normal, Parsing, and Mask
Get the normal maps, the garment parsing, and the masks:
```
python prcess_data_all.py --gid <gpu_id> --root <Your data root> --gender <data gender>
# example
python prcess_data_all.py --gid 0 --root /data/xiaoming --gender male
```
Your data directory:
```
xiaoming/
├── imgs
├── masks
├── normals
└── parsing_SCH_ATR
```
## Step3 SMPL & Camera
To get the SMPL parameters (pose and shape), we use [videoavatar](https://github.com/thmoa/videoavatars):
- Set up the env (**note: it uses Python 2**).
- Prepare keypoint files for each frame in the video and put them under `xiaoming/openpose`; I use [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) for this.
- Run the three Python files in videoavatars/prepare_data to get `keypoints.hdf5`, `masks.hdf5`, and `camera.hdf5`, or just use my script: ```cd videoavatars; python get_reconstructed_poses.py --root xiaoming --out xiaoming --gender male```
- `bash run_step1.sh`
After you run through videoavatar, you will get `camera.pkl` and `reconstructed_poses.hdf5`. Put them under the root (xiaoming) as well.
You can get `smpl_rec.npz, camera.npz` by running:
```
python get_smpl_rec_camera.py --root xiaoming --save_root xiaoming --gender male
```
**Note: you can use any other SMPL estimation algorithm, but you should follow the way smpl_rec.npz stores pose, shape, and trans.**
## Step4 Skinning Weight
We follow [fite](https://github.com/jsnln/fite) to get the LBS skinning weights, which prevent artifacts.
Following fite's README, you will get a skinning weight cube after finishing step 3 (Diffused Skinning). Name it `diffused_skinning_weights.npy` and put it under xiaoming.
|
MStarn/ppo-LunarLander-V2i | MStarn | 2023-08-11T05:46:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-11T05:46:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.33 +/- 15.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
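Until the TODO above is filled in, a minimal sketch of what that usage code could look like is shown below; the checkpoint filename is an assumption based on the common huggingface_sb3 convention, so check the repository files for the actual name.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; adjust to whatever .zip file the repository actually contains.
checkpoint = load_from_hub(repo_id="MStarn/ppo-LunarLander-V2i", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```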
|
satani/two | satani | 2023-08-11T05:35:23Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-11T05:29:21Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### two Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
GMGowtham/results | GMGowtham | 2023-08-11T05:35:18Z | 0 | 0 | null | [
"generated_from_trainer",
"region:us"
] | null | 2023-08-11T05:25:40Z | ---
base_model: NousResearch/llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/llama-2-7b-chat-hf](https://huggingface.co/NousResearch/llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
MStarn/ppo-LunarLander-Unit1 | MStarn | 2023-08-11T05:15:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-11T05:14:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 180.18 +/- 103.84
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Jade1211/textual_inversion_cat | Jade1211 | 2023-08-11T05:03:21Z | 104 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-23T18:26:24Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
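A minimal diffusers sketch for trying the weights follows. It is a hypothetical usage example, not part of the card, and the placeholder token is only an assumption; check the repository for the token actually learned during training.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The token below is an assumption; use the token string this repository defines.
pipe.load_textual_inversion("Jade1211/textual_inversion_cat", token="<cat-toy>")

image = pipe("a photo of <cat-toy> sitting on a sofa").images[0]
image.save("cat.png")
```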
|
SergeantFetus/Anton_Strasser_Red_Orchestra_2_Announcer | SergeantFetus | 2023-08-11T04:58:20Z | 0 | 0 | null | [
"German",
"English",
"Male",
"Old",
"en",
"de",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-08-11T04:43:17Z | ---
license: cc-by-sa-4.0
language:
- en
- de
tags:
- German
- English
- Male
- Old
--- |
jsgao/bert-eli5c-retriever | jsgao | 2023-08-11T04:51:37Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"en",
"dataset:eli5_category",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language: en
license: MIT
datasets:
- eli5_category
---
Document Retriever model for the [ELI5-Category Dataset](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/); it needs an additional projection layer (see the GitHub [repo](https://github.com/rexarski/ANLY580-final-project/blob/main/model_deploy/models/eli5c_qa_model.py)) |
rokset3/kazroberta-80kstep | rokset3 | 2023-08-11T04:48:32Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T04:36:37Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
GMGowtham/alpaca7B-lora | GMGowtham | 2023-08-11T04:47:56Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T08:07:06Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Yorth/poetry-lora | Yorth | 2023-08-11T04:23:50Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T04:23:49Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
smjain/abap-nous-hermes | smjain | 2023-08-11T04:23:09Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:smjain/abap",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-11T01:38:14Z | ---
license: apache-2.0
datasets:
- smjain/abap
language:
- en
---
This model is fine-tuned on a very small ABAP dataset, using NousResearch/Llama-2-7b-chat-hf as the base model.
Sample code:
```python
from transformers import pipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "smjain/abap-nous-hermes"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained('NousResearch/llama-2-7b-chat-hf')

prompt = "Write a sample ABAP report" # change to your desired prompt
gen = pipeline('text-generation', model=model, tokenizer=tokenizer, max_new_tokens=256)
result = gen(prompt)
print(result[0]['generated_text'])
``` |
imagineaiuser/llama2-qlora-finetuned-mental-health-test | imagineaiuser | 2023-08-11T04:21:45Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T04:21:28Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
BabaYaga048/Pixelcopter-PLE-v2 | BabaYaga048 | 2023-08-11T03:59:03Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-11T03:59:01Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 8.10 +/- 6.58
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ckandemir/q-FrozenLake-v1-4x4-noSlippery | ckandemir | 2023-08-11T03:54:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-11T03:54:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ckandemir/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ScottShao/llama2-7b-150steps-8bit-finetunined-sxl | ScottShao | 2023-08-11T03:41:31Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T03:41:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
DylanJHJ/bert-base-final-v0 | DylanJHJ | 2023-08-11T03:18:46Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-11T02:35:28Z |  |
Carmesix/finetuning-sentiment-model-3000-samples | Carmesix | 2023-08-11T02:58:52Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-11T01:32:50Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9088888888888889
- name: F1
type: f1
value: 0.9078651685393259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Accuracy: 0.9089
- F1: 0.9079
## Model description
More information needed
## Intended uses & limitations
More information needed
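As a minimal usage sketch (not part of the generated card), the checkpoint can be loaded through the `transformers` text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Carmesix/finetuning-sentiment-model-3000-samples",
)
# Labels are LABEL_0 / LABEL_1 unless the config maps them to readable names.
print(classifier("This movie was surprisingly good!"))
```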
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Marco-Cheung/whisper-tiny-en | Marco-Cheung | 2023-08-11T02:58:05Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-11T02:41:28Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[451:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3500298151460942
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6528
- Wer Ortho: 0.3529
- Wer: 0.3500
## Model description
More information needed
## Intended uses & limitations
More information needed
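For a quick try-out (a hedged sketch, not part of the generated card), the checkpoint can be used for transcription via the `transformers` ASR pipeline:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Marco-Cheung/whisper-tiny-en",
)
# Any local audio file works; the pipeline resamples it before inference.
print(asr("sample.wav")["text"])
```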
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0011 | 17.24 | 500 | 0.6528 | 0.3529 | 0.3500 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
afterless/reverse-pythia-160m | afterless | 2023-08-11T02:29:52Z | 180 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"Text Generation",
"causal-lm",
"en",
"dataset:EleutherAI/pile",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-02T21:25:59Z | ---
datasets:
- EleutherAI/pile
language:
- en
tags:
- Text Generation
- pytorch
- causal-lm
---
```python
import torch as t  # torch is needed for the t.flip calls below
from transformers import GPTNeoXForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"afterless/reverse-pythia-160m"
)
model = GPTNeoXForCausalLM.from_pretrained(
"afterless/reverse-pythia-160m"
)
inputs = tokenizer(
"but I told him, the cheese was the best",
return_token_type_ids=False,
return_tensors="pt"
)
# the model was trained on reversed text, so flip the prompt tokens first
inputs['input_ids'] = t.flip(inputs.input_ids, (1,))
# generate, then flip the output back into normal reading order
tokens = t.flip(model.generate(**inputs), (1,))
tokenizer.decode(tokens[0])
``` |
TheRains/cv9-special-batch8-lr4-small | TheRains | 2023-08-11T02:16:48Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_9_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-09T08:48:28Z | ---
language:
- id
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_9_0
metrics:
- wer
model-index:
- name: Whisper Small Indonesian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_9_0 id
type: mozilla-foundation/common_voice_9_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 17.437313089487002
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Indonesian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_9_0 id dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4278
- Wer: 17.4373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6566 | 0.97 | 1000 | 0.6284 | 31.7276 |
| 0.3418 | 1.94 | 2000 | 0.5210 | 25.4382 |
| 0.1133 | 2.9 | 3000 | 0.4795 | 22.9216 |
| 0.046 | 3.87 | 4000 | 0.4513 | 19.8712 |
| 0.0088 | 4.84 | 5000 | 0.4278 | 17.4373 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tianpf/chinese-alpaca-2-qlora-finetunined-law | tianpf | 2023-08-11T02:09:55Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T02:09:51Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
YassineBenlaria/wav2vec2-large-xlsr-53_tamasheq_french | YassineBenlaria | 2023-08-11T01:57:46Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-10T22:03:07Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-53_tamasheq_french
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_tamasheq_french
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8742
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.659 | 4.0 | 400 | 2.9280 | 1.0 |
| 2.8942 | 8.0 | 800 | 2.8886 | 1.0 |
| 2.8877 | 12.0 | 1200 | 2.8671 | 1.0 |
| 2.8814 | 16.0 | 1600 | 2.8593 | 1.0 |
| 2.8779 | 20.0 | 2000 | 2.8615 | 1.0 |
| 2.914 | 24.0 | 2400 | 2.9140 | 1.0 |
| 2.8965 | 28.0 | 2800 | 2.8742 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TriDat/squad-bloom-3b | TriDat | 2023-08-11T01:50:01Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T01:49:56Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_kd_NKD_t1.0_g1.5_rand | jordyvl | 2023-08-11T01:36:43Z | 164 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-08-10T16:06:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip-small_rvl_cdip-NK1000_kd_NKD_t1.0_g1.5_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip-small_rvl_cdip-NK1000_kd_NKD_t1.0_g1.5_rand
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1637
- Accuracy: 0.6275
- Brier Loss: 0.6026
- Nll: 2.9068
- F1 Micro: 0.6275
- F1 Macro: 0.6313
- Ece: 0.2499
- Aurc: 0.1609
## Model description
More information needed
## Intended uses & limitations
More information needed
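One way to query the distilled classifier (a sketch, not from the card) is the `transformers` image-classification pipeline; judging from the model name the labels are RVL-CDIP document categories, but that is an assumption.
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_kd_NKD_t1.0_g1.5_rand",
)
print(classifier("scanned_document.png", top_k=3))
```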
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 6.3322 | 1.0 | 1000 | 6.0794 | 0.1835 | 0.8928 | 6.5679 | 0.1835 | 0.1322 | 0.0627 | 0.6846 |
| 5.8198 | 2.0 | 2000 | 5.5963 | 0.3668 | 0.7821 | 3.5543 | 0.3668 | 0.3217 | 0.0967 | 0.4448 |
| 5.53 | 3.0 | 3000 | 5.4184 | 0.4225 | 0.7382 | 3.4217 | 0.4225 | 0.3848 | 0.1087 | 0.3778 |
| 5.3449 | 4.0 | 4000 | 5.1895 | 0.4655 | 0.6813 | 3.0794 | 0.4655 | 0.4562 | 0.1076 | 0.3029 |
| 5.2467 | 5.0 | 5000 | 5.1813 | 0.4592 | 0.6845 | 2.9944 | 0.4592 | 0.4430 | 0.1009 | 0.3125 |
| 5.1382 | 6.0 | 6000 | 5.0102 | 0.4998 | 0.6423 | 2.7804 | 0.4998 | 0.4926 | 0.1013 | 0.2660 |
| 5.0255 | 7.0 | 7000 | 4.9611 | 0.501 | 0.6350 | 2.7692 | 0.501 | 0.5085 | 0.0795 | 0.2690 |
| 4.9089 | 8.0 | 8000 | 4.9327 | 0.508 | 0.6204 | 2.6580 | 0.508 | 0.5068 | 0.0622 | 0.2565 |
| 4.8337 | 9.0 | 9000 | 4.8324 | 0.5467 | 0.5866 | 2.5636 | 0.5467 | 0.5419 | 0.0642 | 0.2274 |
| 4.747 | 10.0 | 10000 | 5.0170 | 0.5302 | 0.6080 | 2.7672 | 0.5302 | 0.5193 | 0.0622 | 0.2452 |
| 4.622 | 11.0 | 11000 | 4.8259 | 0.5593 | 0.5709 | 2.6791 | 0.5593 | 0.5520 | 0.0619 | 0.2090 |
| 4.5449 | 12.0 | 12000 | 4.7696 | 0.5675 | 0.5583 | 2.5273 | 0.5675 | 0.5678 | 0.0541 | 0.2016 |
| 4.447 | 13.0 | 13000 | 4.8718 | 0.5575 | 0.5775 | 2.7597 | 0.5575 | 0.5557 | 0.0575 | 0.2142 |
| 4.341 | 14.0 | 14000 | 4.7644 | 0.5897 | 0.5368 | 2.5797 | 0.5897 | 0.5930 | 0.0560 | 0.1835 |
| 4.2476 | 15.0 | 15000 | 4.8339 | 0.5905 | 0.5485 | 2.6684 | 0.5905 | 0.5903 | 0.0719 | 0.1872 |
| 4.1592 | 16.0 | 16000 | 4.7828 | 0.5877 | 0.5456 | 2.7300 | 0.5877 | 0.5877 | 0.0784 | 0.1832 |
| 4.0513 | 17.0 | 17000 | 4.8771 | 0.5885 | 0.5533 | 2.9097 | 0.5885 | 0.5930 | 0.0965 | 0.1867 |
| 3.9646 | 18.0 | 18000 | 4.8980 | 0.596 | 0.5499 | 2.8383 | 0.596 | 0.5948 | 0.1025 | 0.1797 |
| 3.8768 | 19.0 | 19000 | 4.9787 | 0.605 | 0.5551 | 2.8903 | 0.605 | 0.6050 | 0.1302 | 0.1765 |
| 3.7739 | 20.0 | 20000 | 5.1202 | 0.5945 | 0.5727 | 3.0393 | 0.5945 | 0.5935 | 0.1493 | 0.1821 |
| 3.7023 | 21.0 | 21000 | 5.1879 | 0.5998 | 0.5785 | 2.9570 | 0.5998 | 0.5991 | 0.1690 | 0.1807 |
| 3.6301 | 22.0 | 22000 | 5.2707 | 0.5933 | 0.5908 | 3.1177 | 0.5933 | 0.5971 | 0.1863 | 0.1829 |
| 3.5857 | 23.0 | 23000 | 5.2522 | 0.5887 | 0.5994 | 3.2051 | 0.5887 | 0.5949 | 0.1928 | 0.1857 |
| 3.5256 | 24.0 | 24000 | 5.3443 | 0.6102 | 0.5857 | 2.9687 | 0.6102 | 0.6084 | 0.1953 | 0.1760 |
| 3.4954 | 25.0 | 25000 | 5.3010 | 0.6045 | 0.5874 | 3.0184 | 0.6045 | 0.6053 | 0.1851 | 0.1807 |
| 3.46 | 26.0 | 26000 | 5.4451 | 0.5992 | 0.5994 | 3.0539 | 0.5992 | 0.6033 | 0.2053 | 0.1819 |
| 3.4086 | 27.0 | 27000 | 5.4299 | 0.608 | 0.5913 | 3.1127 | 0.608 | 0.6082 | 0.2027 | 0.1751 |
| 3.3769 | 28.0 | 28000 | 5.6979 | 0.601 | 0.6236 | 3.1077 | 0.601 | 0.6024 | 0.2396 | 0.1777 |
| 3.3238 | 29.0 | 29000 | 5.6090 | 0.611 | 0.6013 | 3.0875 | 0.611 | 0.6114 | 0.2238 | 0.1729 |
| 3.3011 | 30.0 | 30000 | 5.6356 | 0.6105 | 0.5991 | 2.9450 | 0.6105 | 0.6123 | 0.2243 | 0.1719 |
| 3.2708 | 31.0 | 31000 | 5.7634 | 0.604 | 0.6181 | 2.9119 | 0.604 | 0.6075 | 0.2402 | 0.1771 |
| 3.2556 | 32.0 | 32000 | 5.7042 | 0.617 | 0.6002 | 2.9324 | 0.617 | 0.6199 | 0.2263 | 0.1740 |
| 3.2213 | 33.0 | 33000 | 5.7388 | 0.603 | 0.6121 | 2.9240 | 0.603 | 0.6108 | 0.2345 | 0.1782 |
| 3.2138 | 34.0 | 34000 | 5.8008 | 0.6218 | 0.6001 | 2.9209 | 0.6218 | 0.6206 | 0.2284 | 0.1701 |
| 3.1994 | 35.0 | 35000 | 5.7350 | 0.6142 | 0.5967 | 2.9021 | 0.6142 | 0.6147 | 0.2294 | 0.1688 |
| 3.1776 | 36.0 | 36000 | 5.7487 | 0.609 | 0.6032 | 2.8651 | 0.609 | 0.6121 | 0.2329 | 0.1689 |
| 3.1606 | 37.0 | 37000 | 5.8022 | 0.6165 | 0.6075 | 2.8604 | 0.6165 | 0.6189 | 0.2398 | 0.1677 |
| 3.1405 | 38.0 | 38000 | 5.8133 | 0.6235 | 0.5949 | 2.8775 | 0.6235 | 0.6272 | 0.2319 | 0.1640 |
| 3.132 | 39.0 | 39000 | 5.8934 | 0.6232 | 0.5974 | 2.9324 | 0.6232 | 0.6274 | 0.2389 | 0.1639 |
| 3.1303 | 40.0 | 40000 | 5.8902 | 0.6288 | 0.5947 | 2.9049 | 0.6288 | 0.6322 | 0.2344 | 0.1634 |
| 3.1187 | 41.0 | 41000 | 5.9076 | 0.6215 | 0.5987 | 2.8584 | 0.6215 | 0.6261 | 0.2394 | 0.1630 |
| 3.0969 | 42.0 | 42000 | 5.9469 | 0.6265 | 0.5984 | 2.8509 | 0.6265 | 0.6309 | 0.2375 | 0.1631 |
| 3.0964 | 43.0 | 43000 | 5.9442 | 0.6252 | 0.5951 | 2.9309 | 0.6252 | 0.6291 | 0.2397 | 0.1607 |
| 3.0953 | 44.0 | 44000 | 6.0126 | 0.6238 | 0.5998 | 2.8956 | 0.6238 | 0.6274 | 0.2419 | 0.1630 |
| 3.0904 | 45.0 | 45000 | 6.0602 | 0.6295 | 0.5991 | 2.8669 | 0.6295 | 0.6334 | 0.2417 | 0.1609 |
| 3.0794 | 46.0 | 46000 | 6.0782 | 0.6282 | 0.6027 | 2.8830 | 0.6282 | 0.6321 | 0.2442 | 0.1616 |
| 3.0788 | 47.0 | 47000 | 6.1062 | 0.6275 | 0.6003 | 2.8472 | 0.6275 | 0.6316 | 0.2471 | 0.1610 |
| 3.0802 | 48.0 | 48000 | 6.1079 | 0.6285 | 0.5998 | 2.8916 | 0.6285 | 0.6322 | 0.2465 | 0.1600 |
| 3.0644 | 49.0 | 49000 | 6.1569 | 0.6275 | 0.6025 | 2.8941 | 0.6275 | 0.6314 | 0.2497 | 0.1610 |
| 3.0751 | 50.0 | 50000 | 6.1637 | 0.6275 | 0.6026 | 2.9068 | 0.6275 | 0.6313 | 0.2499 | 0.1609 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Ian-14/llm13 | Ian-14 | 2023-08-11T01:30:01Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2103.10360",
"arxiv:2210.02414",
"arxiv:1911.02150",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-10T11:10:47Z | ---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
pipeline_tag: conversational
---
# ChatGLM2-6B
<p align="center">
💻 <a href="https://github.com/THUDM/ChatGLM2-6B" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2103.10360" target="_blank">[GLM@ACL 22]</a> <a href="https://github.com/THUDM/GLM" target="_blank">[GitHub]</a> • 📃 <a href="https://arxiv.org/abs/2210.02414" target="_blank">[GLM-130B@ICLR 23]</a> <a href="https://github.com/THUDM/GLM-130B" target="_blank">[GitHub]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://join.slack.com/t/chatglm/shared_invite/zt-1y7pqoloy-9b1g6T6JjA8J0KxvUjbwJw" target="_blank">Slack</a> and <a href="https://github.com/THUDM/ChatGLM-6B/blob/main/resources/WECHAT.md" target="_blank">WeChat</a>
</p>
## Introduction
ChatGLM**2**-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the following new features:
1. **Stronger Performance**: Based on the development experience of the first-generation ChatGLM model, we have fully upgraded the base model of ChatGLM2-6B. ChatGLM2-6B uses the hybrid objective function of [GLM](https://github.com/THUDM/GLM), and has undergone pre-training with 1.4T bilingual tokens and human preference alignment training. The [evaluation results](README.md#evaluation-results) show that, compared to the first-generation model, ChatGLM2-6B has achieved substantial improvements in performance on datasets like MMLU (+23%), CEval (+33%), GSM8K (+571%), BBH (+60%), showing strong competitiveness among models of the same size.
2. **Longer Context**: Based on [FlashAttention](https://github.com/HazyResearch/flash-attention) technique, we have extended the context length of the base model from 2K in ChatGLM-6B to 32K, and trained with a context length of 8K during the dialogue alignment, allowing for more rounds of dialogue. However, the current version of ChatGLM2-6B has limited understanding of single-round ultra-long documents, which we will focus on optimizing in future iterations.
3. **More Efficient Inference**: Based on [Multi-Query Attention](http://arxiv.org/abs/1911.02150) technique, ChatGLM2-6B has more efficient inference speed and lower GPU memory usage: under the official implementation, the inference speed has increased by 42% compared to the first generation; under INT4 quantization, the dialogue length supported by 6G GPU memory has increased from 1K to 8K.
## Dependencies
```shell
pip install protobuf transformers==4.30.2 cpm_kernels torch>=2.0 gradio mdtex2html sentencepiece accelerate
```
## Code Usage
The ChatGLM2-6B model can be called to generate a conversation with the following code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).half().cuda()
model = model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
response
```
For more instructions, including how to run the CLI and web demos and how to use model quantization to save GPU memory, please refer to our [Github Repo](https://github.com/THUDM/ChatGLM2-6B).
## Change Log
* v1.0
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM2-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers. The ChatGLM2-6B paper will be released soon; stay tuned!
```
@article{zeng2022glm,
title={Glm-130b: An open bilingual pre-trained model},
author={Zeng, Aohan and Liu, Xiao and Du, Zhengxiao and Wang, Zihan and Lai, Hanyu and Ding, Ming and Yang, Zhuoyi and Xu, Yifan and Zheng, Wendi and Xia, Xiao and others},
journal={arXiv preprint arXiv:2210.02414},
year={2022}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
``` |
mohsinshah/git-base-dummy-3 | mohsinshah | 2023-08-11T01:20:05Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"git",
"image-text-to-text",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-08-10T04:10:09Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: git-base-500img-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-500img-dataset
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4161
- Wer Score: 2.0379
## Model description
More information needed
## Intended uses & limitations
More information needed
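A minimal way to run the fine-tuned GIT captioner (a sketch, not from the card) is the `transformers` image-to-text pipeline:
```python
from transformers import pipeline

captioner = pipeline(
    "image-to-text",
    model="mohsinshah/git-base-dummy-3",
)
print(captioner("example.jpg")[0]["generated_text"])
```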
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.0698 | 3.23 | 50 | 4.5086 | 2.5298 |
| 2.6252 | 6.45 | 100 | 0.9823 | 2.2976 |
| 0.5497 | 9.68 | 150 | 0.4681 | 1.6707 |
| 0.2558 | 12.9 | 200 | 0.4162 | 1.7907 |
| 0.1551 | 16.13 | 250 | 0.4052 | 2.0984 |
| 0.1041 | 19.35 | 300 | 0.4054 | 2.0984 |
| 0.0764 | 22.58 | 350 | 0.4088 | 2.0576 |
| 0.0581 | 25.81 | 400 | 0.4054 | 2.0899 |
| 0.0462 | 29.03 | 450 | 0.4092 | 2.0484 |
| 0.0382 | 32.26 | 500 | 0.4118 | 2.1387 |
| 0.0329 | 35.48 | 550 | 0.4126 | 2.1315 |
| 0.0275 | 38.71 | 600 | 0.4139 | 2.0114 |
| 0.0255 | 41.94 | 650 | 0.4173 | 2.0098 |
| 0.0234 | 45.16 | 700 | 0.4155 | 2.0206 |
| 0.0226 | 48.39 | 750 | 0.4161 | 2.0379 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
nlagosr/my_awesome_model | nlagosr | 2023-08-11T00:49:06Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-10T20:10:51Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: nlagosr/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlagosr/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6903
- Validation Loss: 0.7028
- Train Accuracy: 0.4
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 25, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6986 | 0.7027 | 0.4 | 0 |
| 0.6886 | 0.7029 | 0.4 | 1 |
| 0.6903 | 0.7028 | 0.4 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
whywynn/Reinforce-Pixelcopter-PLE-v0 | whywynn | 2023-08-11T00:35:00Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T21:25:12Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 28.90 +/- 30.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
StevenLe456/viet_tones_model | StevenLe456 | 2023-08-11T00:20:12Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"base_model:finetune:nguyenvulebinh/wav2vec2-base-vietnamese-250h",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-10T02:15:25Z | ---
license: cc-by-nc-4.0
base_model: nguyenvulebinh/wav2vec2-base-vietnamese-250h
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: viet_tones_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# viet_tones_model
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9783
- Accuracy: 0.5972
## Model description
More information needed
## Intended uses & limitations
More information needed
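As a hedged usage sketch (not part of the generated card), the classifier can be called with the `transformers` audio-classification pipeline; the assumption, based on the model name, is that it predicts Vietnamese tone categories for short recordings.
```python
from transformers import pipeline

tone_classifier = pipeline(
    "audio-classification",
    model="StevenLe456/viet_tones_model",
)
print(tone_classifier("syllable.wav"))
```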
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 110
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.89 | 6 | 1.7955 | 0.1296 |
| 1.7924 | 1.93 | 13 | 1.7938 | 0.1343 |
| 1.7919 | 2.96 | 20 | 1.7916 | 0.2037 |
| 1.7919 | 4.0 | 27 | 1.7907 | 0.1713 |
| 1.7903 | 4.89 | 33 | 1.7886 | 0.1852 |
| 1.7883 | 5.93 | 40 | 1.7798 | 0.2269 |
| 1.7883 | 6.96 | 47 | 1.7487 | 0.25 |
| 1.7717 | 8.0 | 54 | 1.7104 | 0.2407 |
| 1.726 | 8.89 | 60 | 1.6488 | 0.2685 |
| 1.726 | 9.93 | 67 | 1.5835 | 0.2731 |
| 1.6651 | 10.96 | 74 | 1.6020 | 0.2778 |
| 1.6332 | 12.0 | 81 | 1.5351 | 0.2778 |
| 1.6332 | 12.89 | 87 | 1.4977 | 0.2963 |
| 1.5708 | 13.93 | 94 | 1.4903 | 0.2870 |
| 1.5543 | 14.96 | 101 | 1.4671 | 0.2731 |
| 1.5543 | 16.0 | 108 | 1.3992 | 0.3194 |
| 1.4872 | 16.89 | 114 | 1.3854 | 0.3009 |
| 1.4861 | 17.93 | 121 | 1.3411 | 0.3426 |
| 1.4861 | 18.96 | 128 | 1.3142 | 0.3472 |
| 1.4281 | 20.0 | 135 | 1.3021 | 0.4259 |
| 1.38 | 20.89 | 141 | 1.2657 | 0.4028 |
| 1.38 | 21.93 | 148 | 1.2372 | 0.4352 |
| 1.3472 | 22.96 | 155 | 1.2341 | 0.4815 |
| 1.3029 | 24.0 | 162 | 1.1815 | 0.4306 |
| 1.3029 | 24.89 | 168 | 1.1797 | 0.4954 |
| 1.3042 | 25.93 | 175 | 1.1403 | 0.4583 |
| 1.281 | 26.96 | 182 | 1.1349 | 0.4722 |
| 1.281 | 28.0 | 189 | 1.1369 | 0.4907 |
| 1.2614 | 28.89 | 195 | 1.0999 | 0.4954 |
| 1.2133 | 29.93 | 202 | 1.1677 | 0.4676 |
| 1.2133 | 30.96 | 209 | 1.0785 | 0.5 |
| 1.2527 | 32.0 | 216 | 1.1092 | 0.4861 |
| 1.1722 | 32.89 | 222 | 1.0424 | 0.5185 |
| 1.1722 | 33.93 | 229 | 1.0791 | 0.4907 |
| 1.1225 | 34.96 | 236 | 1.0447 | 0.4907 |
| 1.1447 | 36.0 | 243 | 1.0777 | 0.4583 |
| 1.1447 | 36.89 | 249 | 1.0141 | 0.4954 |
| 1.1484 | 37.93 | 256 | 1.0196 | 0.5324 |
| 1.11 | 38.96 | 263 | 0.9791 | 0.5417 |
| 1.046 | 40.0 | 270 | 0.9798 | 0.5231 |
| 1.046 | 40.89 | 276 | 0.9366 | 0.5694 |
| 1.0582 | 41.93 | 283 | 0.9645 | 0.5602 |
| 1.0569 | 42.96 | 290 | 0.9764 | 0.5694 |
| 1.0569 | 44.0 | 297 | 1.0340 | 0.5324 |
| 1.028 | 44.89 | 303 | 0.9969 | 0.5463 |
| 1.04 | 45.93 | 310 | 1.0251 | 0.5185 |
| 1.04 | 46.96 | 317 | 1.0447 | 0.5417 |
| 0.9889 | 48.0 | 324 | 0.9487 | 0.5324 |
| 1.0055 | 48.89 | 330 | 1.0147 | 0.5 |
| 1.0055 | 49.93 | 337 | 1.0015 | 0.5046 |
| 0.9955 | 50.96 | 344 | 0.9763 | 0.5278 |
| 0.9382 | 52.0 | 351 | 1.0306 | 0.5278 |
| 0.9382 | 52.89 | 357 | 0.9970 | 0.5463 |
| 0.9601 | 53.93 | 364 | 0.9487 | 0.5741 |
| 0.9736 | 54.96 | 371 | 0.9658 | 0.5463 |
| 0.9736 | 56.0 | 378 | 0.9789 | 0.5602 |
| 0.9237 | 56.89 | 384 | 0.9940 | 0.5463 |
| 0.9588 | 57.93 | 391 | 0.9778 | 0.5463 |
| 0.9588 | 58.96 | 398 | 0.9789 | 0.5648 |
| 0.9393 | 60.0 | 405 | 0.9612 | 0.5602 |
| 0.9291 | 60.89 | 411 | 0.9141 | 0.5556 |
| 0.9291 | 61.93 | 418 | 0.9770 | 0.5463 |
| 0.929 | 62.96 | 425 | 0.9385 | 0.5556 |
| 0.9448 | 64.0 | 432 | 0.9504 | 0.5463 |
| 0.9448 | 64.89 | 438 | 0.9984 | 0.5463 |
| 0.9426 | 65.93 | 445 | 0.9228 | 0.5602 |
| 0.8949 | 66.96 | 452 | 0.9729 | 0.5509 |
| 0.8949 | 68.0 | 459 | 0.9825 | 0.5602 |
| 0.9041 | 68.89 | 465 | 0.9769 | 0.5509 |
| 0.8828 | 69.93 | 472 | 0.9914 | 0.5648 |
| 0.8828 | 70.96 | 479 | 0.9838 | 0.5509 |
| 0.8874 | 72.0 | 486 | 0.9646 | 0.5741 |
| 0.8723 | 72.89 | 492 | 1.0682 | 0.5324 |
| 0.8723 | 73.93 | 499 | 1.0629 | 0.5417 |
| 0.8953 | 74.96 | 506 | 0.9770 | 0.5648 |
| 0.879 | 76.0 | 513 | 1.0038 | 0.5787 |
| 0.879 | 76.89 | 519 | 1.0529 | 0.5648 |
| 0.896 | 77.93 | 526 | 1.0300 | 0.5602 |
| 0.8519 | 78.96 | 533 | 1.0451 | 0.5463 |
| 0.8414 | 80.0 | 540 | 1.0755 | 0.5509 |
| 0.8414 | 80.89 | 546 | 1.0287 | 0.5556 |
| 0.8342 | 81.93 | 553 | 1.0140 | 0.5602 |
| 0.8653 | 82.96 | 560 | 1.0787 | 0.5463 |
| 0.8653 | 84.0 | 567 | 1.0762 | 0.5509 |
| 0.8357 | 84.89 | 573 | 1.0307 | 0.5741 |
| 0.8455 | 85.93 | 580 | 1.0171 | 0.5648 |
| 0.8455 | 86.96 | 587 | 0.9886 | 0.5880 |
| 0.8238 | 88.0 | 594 | 0.9806 | 0.5741 |
| 0.8613 | 88.89 | 600 | 1.0177 | 0.5833 |
| 0.8613 | 89.93 | 607 | 1.0273 | 0.5602 |
| 0.8265 | 90.96 | 614 | 0.9857 | 0.5926 |
| 0.831 | 92.0 | 621 | 0.9701 | 0.5972 |
| 0.831 | 92.89 | 627 | 0.9726 | 0.5972 |
| 0.8247 | 93.93 | 634 | 0.9765 | 0.5880 |
| 0.8041 | 94.96 | 641 | 0.9801 | 0.5926 |
| 0.8041 | 96.0 | 648 | 0.9796 | 0.5926 |
| 0.8387 | 96.89 | 654 | 0.9790 | 0.5972 |
| 0.7906 | 97.78 | 660 | 0.9783 | 0.5972 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nhat117/dica-llama2-13b-chat-hf-3 | nhat117 | 2023-08-11T00:14:46Z | 4 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T00:10:38Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
NateBenz/llama2-prompt-reformatting-generator | NateBenz | 2023-08-11T00:05:22Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-08T01:25:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
allenbc/Taxi-v3 | allenbc | 2023-08-11T00:03:51Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-11T00:03:48Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="allenbc/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
moisesrobles04/Reinforcement-CartPole-Unit4 | moisesrobles04 | 2023-08-10T23:46:37Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T23:46:27Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforcement-CartPole-Unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Jade1211/textual_inversion_bambi | Jade1211 | 2023-08-10T23:16:51Z | 3 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-10T22:26:08Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_bambi
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
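A minimal usage sketch with `diffusers`; the placeholder token is an assumption (shown here as `<bambi>`), so check the embedding's actual token name before prompting:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned textual inversion embedding from this repo
pipe.load_textual_inversion("Jade1211/textual_inversion_bambi")

# Assumption: the placeholder token is "<bambi>"; use the token the embedding was trained with
image = pipe("a watercolor painting of <bambi> in a forest").images[0]
image.save("bambi.png")
```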
|
Yntec/ArcticFowl | Yntec | 2023-08-10T22:59:14Z | 267 | 4 | diffusers | [
"diffusers",
"safetensors",
"anime",
"art",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"ArcticFlamingo",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-09T20:38:20Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- anime
- art
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- ArcticFlamingo
---
This is the ArcticFlamingo model with the Blessed2 VAE baked in.
Demo image by digiplay:

Samples and prompts:


Pretty cute girl. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Thumbs up. Acrylic art on canvas by ROSSDRAWS and Clay Mann and tyler edlin
Original pages:
https://civitai.com/models/16164?modelVersionId=84783
https://huggingface.co/NoCrypt/blessed_vae/tree/main |
dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2-2 | dvs | 2023-08-10T22:36:32Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2",
"base_model:finetune:dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-08-10T19:31:05Z | ---
license: cc-by-nc-4.0
base_model: dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-movienet-2-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-movienet-2-2
This model is a fine-tuned version of [dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2](https://huggingface.co/dvs/videomae-base-finetuned-kinetics-finetuned-movienet-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4970
- eval_accuracy: 0.7552
- eval_runtime: 166.7089
- eval_samples_per_second: 1.152
- eval_steps_per_second: 0.144
- epoch: 6.0
- step: 1117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- training_steps: 1850
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vvonchain/lora-trained-xl-colab | vvonchain | 2023-08-10T22:14:39Z | 16 | 2 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-10T20:57:59Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - vvonchain/lora-trained-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
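A minimal inference sketch with `diffusers`, assuming the standard SDXL LoRA loading path; the instance prompt is the one listed in this card:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repo
pipe.load_lora_weights("vvonchain/lora-trained-xl-colab")

# Instance prompt used during training (from the card metadata)
image = pipe("a photo of sks dog", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```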
|
omersen/omer-model | omersen | 2023-08-10T22:13:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-10T19:04:26Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of person omer
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - omersen/omer-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of person omer using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
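A minimal inference sketch, assuming the repo loads directly as a `StableDiffusionPipeline`; the instance prompt comes from the card metadata:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "omersen/omer-model", torch_dtype=torch.float16
).to("cuda")

# Instance prompt the weights were trained on (from the card metadata)
image = pipe("a photo of person omer", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("omer.png")
```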
|
Doctor-Shotgun/Chronos-Hermes-v2-13b-Limarp-Lora-Merged | Doctor-Shotgun | 2023-08-10T22:06:47Z | 13 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-03T07:47:25Z | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: agpl-3.0
---
# Model Card: Chronos-Hermes-v2-13b-LIMARP-Lora-Merged
This is a Llama 2-based model consisting of Chronos Hermes v2 13b (https://huggingface.co/Austism/chronos-hermes-13b-v2) merged with LIMARP Lora (https://huggingface.co/lemonilia/limarp-llama2) using the now-updated standard lora adapter for LIMARP (July 28, 2023).
The intended objective was to add some different roleplay flavor to the Chronos Hermes v2 model.
added_tokens.json was padded with dummy tokens to reach 32 added tokens in order to allow GGML conversion in llama.cpp without error due to vocab size mismatch.
## Usage:
Intended to be prompted either with the Alpaca instruction format of the base model:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
Or the LIMARP lora instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
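A hedged generation sketch with `transformers` using the Alpaca format above; the sampling settings are illustrative, not recommendations from the model author:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Doctor-Shotgun/Chronos-Hermes-v2-13b-Limarp-Lora-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_4bit=True)

prompt = "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Print only the newly generated continuation
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```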
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the base model and lora for details. |
abedininiaz/setfit-test-model | abedininiaz | 2023-08-10T22:05:39Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-08-09T19:58:24Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# abedininiaz/setfit-test-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("abedininiaz/setfit-test-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
adon81/dealFindr-finetuned | adon81 | 2023-08-10T22:04:15Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-10T21:43:12Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: dealFindr-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dealFindr-finetuned
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 6 | 13.2699 |
| No log | 2.0 | 12 | 12.2786 |
| No log | 3.0 | 18 | 11.5745 |
| No log | 4.0 | 24 | 10.8457 |
| No log | 5.0 | 30 | 9.8424 |
| No log | 6.0 | 36 | 8.5779 |
| No log | 7.0 | 42 | 6.9630 |
| No log | 8.0 | 48 | 6.1362 |
| No log | 9.0 | 54 | 5.6167 |
| No log | 10.0 | 60 | 5.3033 |
| No log | 11.0 | 66 | 5.0873 |
| No log | 12.0 | 72 | 4.8782 |
| No log | 13.0 | 78 | 4.7162 |
| No log | 14.0 | 84 | 4.6101 |
| No log | 15.0 | 90 | 4.5256 |
| No log | 16.0 | 96 | 4.4572 |
| No log | 17.0 | 102 | 4.4019 |
| No log | 18.0 | 108 | 4.3624 |
| No log | 19.0 | 114 | 4.3405 |
| No log | 20.0 | 120 | 4.3328 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
monideep2255/psst_batch_size_4_base_model | monideep2255 | 2023-08-10T21:59:11Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-10T20:20:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: psst_batch_size_4_base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# psst_batch_size_4_base_model
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6743
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 14.1952 | 1.68 | 100 | 3.6352 |
| 3.9092 | 3.36 | 200 | 3.7223 |
| 3.9981 | 5.04 | 300 | 3.6864 |
| 3.7209 | 6.72 | 400 | 3.6310 |
| 3.9395 | 8.4 | 500 | 3.7229 |
| 3.7126 | 10.08 | 600 | 3.6163 |
| 3.6999 | 11.76 | 700 | 3.6776 |
| 3.7203 | 13.45 | 800 | 3.7568 |
| 3.7202 | 15.13 | 900 | 3.6998 |
| 3.7023 | 16.81 | 1000 | 3.6943 |
| 3.689 | 18.49 | 1100 | 3.6501 |
| 3.7009 | 20.17 | 1200 | 3.6973 |
| 3.6882 | 21.85 | 1300 | 3.6938 |
| 3.6907 | 23.53 | 1400 | 3.6795 |
| 3.6869 | 25.21 | 1500 | 3.6727 |
| 3.681 | 26.89 | 1600 | 3.6749 |
| 3.6968 | 28.57 | 1700 | 3.6743 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
monideep2255/psst_batch_size_16_base_model | monideep2255 | 2023-08-10T21:58:31Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-10T20:09:09Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: psst_batch_size_16_base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# psst_batch_size_16_base_model
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 14.7691 | 6.67 | 100 | 3.7126 |
| 3.7857 | 13.33 | 200 | 3.6929 |
| 3.6981 | 20.0 | 300 | 3.6843 |
| 3.6883 | 26.67 | 400 | 3.6719 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
bilbo991/clip-roberta-100k | bilbo991 | 2023-08-10T21:50:57Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-08-10T19:30:22Z | ---
base_model: clip-roberta-100k
tags:
- generated_from_trainer
model-index:
- name: clip-roberta-100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-100k
This model is a fine-tuned version of [clip-roberta-100k](https://huggingface.co/clip-roberta-100k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4657 | 1.0 | 3125 | 3.4646 |
| 3.4658 | 2.0 | 6250 | 3.4646 |
| 3.4657 | 3.0 | 9375 | 3.4646 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.1
- Tokenizers 0.13.3
|
tbelote/CodeFixStarcoder | tbelote | 2023-08-10T21:19:22Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T21:19:17Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
elinas/chronos-13b-v2-GPTQ | elinas | 2023-08-10T21:05:20Z | 22 | 7 | transformers | [
"transformers",
"llama",
"text-generation",
"pytorch",
"chatbot",
"storywriting",
"generalist-model",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-02T19:01:49Z | ---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
- generalist-model
---
# chronos-13b-v2
This is the 4bit GPTQ of **chronos-13b-v2** based on the **Llama v2 Base** model. It works with Exllama and AutoGPTQ.
This model is primarily focused on chat, roleplay, and storywriting, with good reasoning and logic.
Chronos can generate very long outputs with coherent text, largely due to the human inputs it was trained on, and it supports context length up to 4096 tokens.
This model uses Alpaca formatting, so for optimal model performance, either use a frontend like SillyTavern or continue your story with it:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
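A hedged loading sketch with AutoGPTQ (the card notes the model works with Exllama and AutoGPTQ). Whether the weights are stored as safetensors is an assumption, and remember to rename the appropriate quantize config first, as described below:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "elinas/chronos-13b-v2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Assumption: weights are in safetensors format; drop use_safetensors otherwise
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0", use_safetensors=True)

prompt = "### Instruction:\nDescribe a quiet mountain village at dawn.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```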
# Quantize Config
Rename `quantize_config_Xg.json` (where X is the group size) to `quantize_config.json` for the version you pick.
## Other Versions
[Original FP16 Model](https://huggingface.co/elinas/chronos-13b-v2)
[GGML Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)
**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
|
tingchih/pretrain_sent_concat | tingchih | 2023-08-10T20:22:17Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-10T01:15:41Z | the example result in the records |
monideep2255/batch_size_8_50_epochs_base_model | monideep2255 | 2023-08-10T20:03:31Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-10T18:02:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: batch_size_8_50_epochs_base_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch_size_8_50_epochs_base_model
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.5872 | 6.67 | 200 | 3.6529 |
| 3.8231 | 13.33 | 400 | 3.7135 |
| 3.7257 | 20.0 | 600 | 3.7110 |
| 3.7043 | 26.67 | 800 | 3.6998 |
| 3.6979 | 33.33 | 1000 | 3.6782 |
| 3.6876 | 40.0 | 1200 | 3.6811 |
| 3.6897 | 46.67 | 1400 | 3.6780 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
julianty/opus-tatoeba-en-ja-finetuned-eng-to-jpn_Hani | julianty | 2023-08-10T20:00:22Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:tatoeba_mt",
"base_model:Helsinki-NLP/opus-tatoeba-en-ja",
"base_model:finetune:Helsinki-NLP/opus-tatoeba-en-ja",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-10T19:45:37Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-tatoeba-en-ja
tags:
- generated_from_trainer
datasets:
- tatoeba_mt
metrics:
- bleu
model-index:
- name: opus-tatoeba-en-ja-finetuned-eng-to-jpn_Hani
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: tatoeba_mt
type: tatoeba_mt
config: eng-jpn_Hani
split: test
args: eng-jpn_Hani
metrics:
- name: Bleu
type: bleu
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-tatoeba-en-ja-finetuned-eng-to-jpn_Hani
This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on the tatoeba_mt dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4742
- Bleu: 0.0
- Gen Len: 18.9426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 6.4645 | 1.0 | 1244 | 6.4742 | 0.0 | 18.9426 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
shekmanchoy/medical_adapter_parallel | shekmanchoy | 2023-08-10T19:58:34Z | 0 | 0 | null | [
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-08-10T19:54:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: parallel_medical_adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# parallel_medical_adapter
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Dulence/speecht5_tts_voxpopuli_hr | Dulence | 2023-08-10T19:57:29Z | 85 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"dusan",
"generated_from_trainer",
"hr",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-08-10T19:56:21Z | ---
language:
- hr
license: mit
base_model: microsoft/speecht5_tts
tags:
- dusan
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Hrvatski
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Hrvatski
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli hr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4915 | 3.24 | 1000 | 0.4504 |
| 0.4757 | 6.49 | 2000 | 0.4366 |
| 0.4653 | 9.73 | 3000 | 0.4318 |
| 0.4636 | 12.98 | 4000 | 0.4304 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nesanchezo/model_handwritenNumbers-nesanchezo | nesanchezo | 2023-08-10T19:53:48Z | 243 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-07T14:49:59Z | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: model_handwritenNumbers-nesanchezo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_handwritenNumbers-nesanchezo
This model is a fine-tuned version of [farleyknight-org-username/vit-base-mnist](https://huggingface.co/farleyknight-org-username/vit-base-mnist) on the handwriten-Numbers dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0807
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.396 | 0.34 | 500 | 0.1925 | 0.9470 |
| 0.2672 | 0.67 | 1000 | 0.2655 | 0.9297 |
| 0.2261 | 1.01 | 1500 | 0.1767 | 0.9548 |
| 0.1603 | 1.34 | 2000 | 0.1423 | 0.9658 |
| 0.1308 | 1.68 | 2500 | 0.1378 | 0.9709 |
| 0.1187 | 2.02 | 3000 | 0.1168 | 0.9737 |
| 0.0873 | 2.35 | 3500 | 0.0857 | 0.9823 |
| 0.0686 | 2.69 | 4000 | 0.1188 | 0.9753 |
| 0.0635 | 3.03 | 4500 | 0.0836 | 0.9804 |
| 0.034 | 3.36 | 5000 | 0.0807 | 0.9839 |
| 0.0155 | 3.7 | 5500 | 0.0898 | 0.9823 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ailabturkiye/GoogleAsistan | ailabturkiye | 2023-08-10T19:11:10Z | 0 | 0 | null | [
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-10T19:05:12Z | ---
license: openrail
language:
- tr
tags:
- music
---
A voice model built from voices generated with Google Translate. The training and dataset belong to me. |
samlikesphysics/llm-rsa-mi | samlikesphysics | 2023-08-10T18:46:13Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2023-08-10T18:37:47Z | ---
license: mit
---
language:
- en
thumbnail:
tags:
- tag1
- tag2
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2 |
EdJ1234/lora-peft-legal-summ-v1 | EdJ1234 | 2023-08-10T18:44:22Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:40:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
Francesco-A/code-search-net-tokenizer | Francesco-A | 2023-08-10T18:41:13Z | 0 | 1 | null | [
"code tokenizer",
"python tokenizer",
"GPT-2",
"code",
"dataset:code_search_net",
"license:apache-2.0",
"region:us"
] | null | 2023-07-22T17:22:55Z | ---
license: apache-2.0
datasets:
- code_search_net
language:
- code
tags:
- code tokenizer
- python tokenizer
- GPT-2
---
**Model Card: (TEST) code-search-net-tokenizer**
**Model Description:**
The Code Search Net Tokenizer is a custom tokenizer specifically trained for tokenizing Python code snippets. It has been trained on a large corpus of Python code snippets from the CodeSearchNet dataset using the GPT-2 model as a starting point. The goal of this tokenizer is to effectively tokenize Python code for use in various natural language processing and code-related tasks.
**Model Details:**
- Name: Code Search Net Tokenizer
- Model Type: Custom Tokenizer
- Language: Python
**Training Data:**
The tokenizer was trained on a corpus of Python code snippets from the CodeSearchNet dataset. The dataset consists of various Python code examples collected from open-source repositories on GitHub. The tokenizer has been fine-tuned on this dataset to create a specialized vocabulary that captures the unique syntax and structure of Python code.
**Tokenizer Features:**
The Code Search Net Tokenizer offers the following features:
- Tokenization of Python code: The tokenizer can effectively split Python code snippets into individual tokens, making it suitable for downstream tasks that involve code processing and understanding.
**Usage:**
You can use the `code-search-net-tokenizer` to preprocess code snippets and convert them into numerical representations suitable for feeding into language models.
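For example, a minimal sketch of loading the tokenizer from this repo and tokenizing a Python snippet (the snippet itself is just an illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Francesco-A/code-search-net-tokenizer")

code_snippet = "def add(a, b):\n    return a + b"

tokens = tokenizer.tokenize(code_snippet)   # human-readable subword tokens
encoded = tokenizer(code_snippet)           # input_ids and attention_mask for a model

print(tokens)
print(encoded["input_ids"])
```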
**Limitations:**
The `code-search-net-tokenizer` is specifically tailored to code-related text data and may not be suitable for general text tasks. It may not perform optimally for natural language text outside the programming context. |
EulerianKnight/taxi-v3-1 | EulerianKnight | 2023-08-10T18:29:31Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T18:29:29Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="EulerianKnight/taxi-v3-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
EulerianKnight/q-FrozenLake-v1-4x4-noSlippery | EulerianKnight | 2023-08-10T18:27:29Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T18:27:27Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="EulerianKnight/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
KingKazma/cnn_dailymail_t5-small_prefix_tuning_500_10_3000_8_e-1_s6789_v3_l6_v20_manual | KingKazma | 2023-08-10T18:25:17Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:25:15Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e9_s6789_v3_l6_r4 | KingKazma | 2023-08-10T18:24:41Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:24:39Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
vrajur/Reinforce-Pixelcopter-PLE-v0 | vrajur | 2023-08-10T18:21:42Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T18:21:38Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.20 +/- 16.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
carlos-rodcor/ppo-LunarLander-v2 | carlos-rodcor | 2023-08-10T18:19:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T18:18:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.75 +/- 16.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
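A hedged completion of the template above — the checkpoint filename is an assumption (check the repo's file list for the actual `.zip` name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint is stored as "ppo-LunarLander-v2.zip" in this repo
checkpoint = load_from_hub(repo_id="carlos-rodcor/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="rgb_array")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```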
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e9_s6789_v3_l4_r4 | KingKazma | 2023-08-10T18:15:55Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:15:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s6789_v3_l6_r4 | KingKazma | 2023-08-10T18:10:32Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:10:29Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e8_s6789_v3_l4_r4 | KingKazma | 2023-08-10T18:08:56Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:08:52Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
llama-anon/petra-13b-instruct | llama-anon | 2023-08-10T18:05:32Z | 10 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-09T23:06:54Z | ---
license: agpl-3.0
---
LLaMA-13B merged with Instruct-13B weights, just werks.
Prompt format:
```
user instruction here
optional additional user input
generated output
```
Example prompt:
```
Does this tweet have negative or positive sentiment?
i hate my life!!!!
negative
```
Feel free to donate:
XMR: ```86Z8nLSVPx3SZ5z7iWugeK5JruAeGPUJyExD9e3wdTSxUvFMhGXNG9ucPqCm8M29y1AxP6ta56GBQ4GiEUMzeew9MfX1yct``` |
jfrojanoj/dqn-SpaceInvadersNoFrameskip-v4 | jfrojanoj | 2023-08-10T18:05:31Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-10T18:04:54Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 733.00 +/- 217.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jfrojanoj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jfrojanoj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jfrojanoj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e7_s6789_v3_l4_r4 | KingKazma | 2023-08-10T18:01:58Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T18:01:54Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
smangrul/peft-lora-starcoder15B-personal-copilot-A100-40GB-colab | smangrul | 2023-08-10T18:00:28Z | 5 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-09T20:22:03Z | ---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: peft-lora-starcoder15B-personal-copilot-A100-40GB-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder15B-personal-copilot-A100-40GB-colab
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6593 | 0.05 | 100 | 0.5847 |
| 0.6226 | 0.1 | 200 | 0.5292 |
| 0.6597 | 0.15 | 300 | 0.4814 |
| 0.5523 | 0.2 | 400 | 0.4617 |
| 0.4856 | 0.25 | 500 | 0.4597 |
| 0.5237 | 0.3 | 600 | 0.4505 |
| 0.4894 | 0.35 | 700 | 0.4398 |
| 0.5579 | 0.4 | 800 | 0.4377 |
| 0.4702 | 0.45 | 900 | 0.4322 |
| 0.5418 | 0.5 | 1000 | 0.4244 |
| 0.5159 | 0.55 | 1100 | 0.4133 |
| 0.524 | 0.6 | 1200 | 0.3977 |
| 0.4138 | 0.65 | 1300 | 0.3966 |
| 0.5572 | 0.7 | 1400 | 0.3936 |
| 0.4146 | 0.75 | 1500 | 0.3904 |
| 0.7927 | 0.8 | 1600 | 0.3905 |
| 0.4131 | 0.85 | 1700 | 0.3866 |
| 0.4552 | 0.9 | 1800 | 0.3881 |
| 0.3914 | 0.95 | 1900 | 0.3794 |
| 0.4945 | 1.0 | 2000 | 0.3633 |
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
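A hedged inference sketch for loading this LoRA adapter on top of the base model with PEFT; 4-bit loading mirrors the quantization config above, and access to `bigcode/starcoder` requires accepting its license on the Hub:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoder"
adapter_id = "smangrul/peft-lora-starcoder15B-personal-copilot-A100-40GB-colab"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", load_in_4bit=True)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```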
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e5_s6789_v3_l6_r4 | KingKazma | 2023-08-10T17:56:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:56:19Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
moisesrobles04/SpaceInvader-v4 | moisesrobles04 | 2023-08-10T17:55:44Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-19T17:49:36Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga moisesrobles04 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga moisesrobles04 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga moisesrobles04
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 102000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.2),
('frame_stack', 5),
('gradient_steps', 1),
('learning_rate', 0.01),
('learning_starts', 100000),
('n_timesteps', 950000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 5),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e6_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:55:00Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:54:56Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
edumunozsala/bertin_base_sentiment_analysis_es | edumunozsala | 2023-08-10T17:50:32Z | 131 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"sagemaker",
"bertin",
"TextClassification",
"SentimentAnalysis",
"es",
"dataset:IMDbreviews_es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-05-15T16:40:29Z | ---
language: es
tags:
- sagemaker
- bertin
- TextClassification
- SentimentAnalysis
license: apache-2.0
datasets:
- IMDbreviews_es
metrics:
- accuracy
model-index:
- name: bertin_base_sentiment_analysis_es
results:
- task:
name: Sentiment Analysis
type: sentiment-analysis
dataset:
name: "IMDb Reviews in Spanish"
type: IMDbreviews_es
metrics:
- name: Accuracy
type: accuracy
value: 0.898933
- name: F1 Score
type: f1
value: 0.8989063
- name: Precision
type: precision
value: 0.8771473
- name: Recall
type: recall
value: 0.9217724
widget:
- text: "Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
---
# Model bertin_base_sentiment_analysis_es
## **A finetuned model for Sentiment analysis in Spanish**
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
The base model is **Bertin base**, which is a RoBERTa-base model pre-trained on the Spanish portion of mC4 using Flax.
It was trained by the Bertin Project. [Link to base model](https://huggingface.co/bertin-project/bertin-roberta-base-spanish)
Article: BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
- Author = Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury,
- journal = Procesamiento del Lenguaje Natural,
- volume = 68, number = 0, year = 2022
- url = http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403
## Dataset
The dataset is a collection of about 50,000 movie reviews in Spanish. The dataset is balanced and provides every review in English and in Spanish, along with the label in both languages.
Sizes of datasets:
- Train dataset: 42,500
- Validation dataset: 3,750
- Test dataset: 3,750
## Intended uses & limitations
This model is intended for sentiment analysis on Spanish corpora and is fine-tuned specifically for movie reviews, but it can be applied to other kinds of reviews.
## Hyperparameters
{
"epochs": "4",
"train_batch_size": "32",
"eval_batch_size": "8",
"fp16": "true",
"learning_rate": "3e-05",
"model_name": "\"bertin-project/bertin-roberta-base-spanish\"",
"sagemaker_container_log_level": "20",
"sagemaker_program": "\"train.py\"",
}
## Evaluation results
- Accuracy = 0.8989333333333334
- F1 Score = 0.8989063750333421
- Precision = 0.877147319104633
- Recall = 0.9217724288840262
## Test results
## Model in action
### Usage for Sentiment Analysis
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
model = AutoModelForSequenceClassification.from_pretrained("edumunozsala/bertin_base_sentiment_analysis_es")
text ="Se trata de una película interesante, con un solido argumento y un gran interpretación de su actor principal"
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
outputs = model(input_ids)
output = outputs.logits.argmax(1)
```
Created by [Eduardo Muñoz/@edumunozsala](https://github.com/edumunozsala)
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s6789_v3_l6_r4 | KingKazma | 2023-08-10T17:49:16Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:49:14Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Veucci/lyric-to-genre | Veucci | 2023-08-10T17:43:04Z | 176 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"music",
"en",
"dataset:Veucci/lyric-to-3genre",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-02T13:32:50Z | ---
license: cc-by-nc-4.0
datasets:
- Veucci/lyric-to-3genre
language:
- en
library_name: transformers
tags:
- music
widget:
- text: >-
When I'm away from you, I'm happier than ever Wish I could explain it better
I wish it wasn't true Give me a day or two to think of something clever To
write myself a letter To tell me what to do, mm-mmm Do you read my
interviews? Or do you skip my avenue? (My avenue) When you (when you) said
you were passing through Was I even on your way? I knew when I asked you to
(when I asked you to) Be cool about what I was telling you You'd do the
opposite of what you said you'd do (what you said you'd do) And I'd end up
more afraid Don't say it isn't fair You clearly weren't aware that you made
me miserable So if you really wanna know
example_title: (Pop) Happier Than Ever - Billie Eilish
- text: >-
Look, I was gonna go easy on you and not to hurt your feelings But I'm only
going to get this one chance (six minutes, six minutes) Something's wrong, I
can feel it (six minutes, six minutes, Slim Shady, you're on) Just a feeling
I've got, like something's about to happen, but I don't know what If that
means what I think it means, we're in trouble, big trouble And if he is as
bananas as you say, I'm not taking any chances You are just what the doctor
ordered I'm beginning to feel like a Rap God, Rap God All my people from the
front to the back nod, back nod Now who thinks their arms are long enough to
slap box, slap box? They said I rap like a robot, so call me Rapbot
example_title: (Hip-Hop) Rap God - Eminem
- text: >-
Come as you are, as you were As I want you to be As a friend, as a friend As
an old enemy Take your time, hurry up Choice is yours, don't be late Take a
rest as a friend As an old Memoria, memoria Memoria, memoria Come doused in
mud, soaked in bleach As I want you to be As a trend, as a friend As an old
Memoria, memoria Memoria, memoria
example_title: (Rock) Come as You Are - Nirvana
---
# Lyrics Genre Classification Model
## Description
The model was trained using the BERT language model on my [song lyrics dataset](https://huggingface.co/datasets/Veucci/lyrics_3genre) to predict the genre of a given song based on its lyrics. This repository houses the machine learning model, which is capable of making predictions in three distinct genres: Pop, Rock, and Hip-Hop.
For the training and test code, check out the [Github page](https://github.com/Veucci/lyrics-to-genre-lite).
## Dataset
The model was trained on a diverse and labeled dataset of song lyrics, which contained approximately 3000 rows. The dataset was carefully curated to include songs from a wide range of artists and genres, ensuring a comprehensive representation of Pop, Rock, and Hip-Hop music.
[DATASET](https://huggingface.co/datasets/Veucci/lyrics_3genre)
## Quick Start
```py
from transformers import pipeline
classifier = pipeline("text-classification", model="Veucci/lyric-to-genre")
result = classifier("When I'm away from you, I'm happier than ever Wish I could explain it better I wish it wasn't true")
print(result)
```
## License
This model is released under the Creative Commons Attribution-NonCommercial license, which means it may not be used for commercial purposes. For detailed information about the license, please refer to the [LICENSE](./LICENSE) file.
## Contact
If you have any questions, suggestions, or concerns regarding this model, please feel free to reach out via email at [[email protected]](mailto:[email protected]).
I hope this model helps in your genre classification tasks and inspires further exploration of song lyrics analysis! |
KingKazma/xsum_gpt2_lora_500_10_3000_8_e4_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:41:03Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:40:59Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Veucci/turkish-lyric-to-genre | Veucci | 2023-08-10T17:39:19Z | 130 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"music",
"tr",
"dataset:Veucci/turkish-lyric-to-genre",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-05T19:32:54Z | ---
license: cc-by-nc-4.0
datasets:
- Veucci/turkish-lyric-to-genre
language:
- tr
library_name: transformers
tags:
- music
widget:
- text: Çaldığın o kalbi yerine koy lütfen Eğer hislerinden pek emin değilsen
Aradığın aşksa en güzelinden O zaman başka Açarım kapıları hazırım dünden
Çaldığın o kalbi yerine koy lütfen Eğer hislerinden pek emin değilsen
Aradığın aşksa en güzelinden O zaman başka Açarım kapıları hazırım dünden
example_title: (Pop) O Sen Olsan Bari - Aleyna Tilki
- text: Nefes alamam, boğazıma kadar dolu Yük dolu, kül dolu iç yarası Bu kez
yaramaz, ilaçlar ama Sonun ölüm dostum o yüzden iç yarasın Nefes alamam,
boğazıma kadar dolu Yük dolu, kül dolu iç yarası Bu kez yaramaz, ilaçlar
ama Sonun ölüm dostum o yüzden iç yarasın Karanlık ev, bu sokak, bütün
gezegen Ateş yak, ateş yak eskimiş seneler Zaman ilaç dediler ne gelir
elimden? Işıksızım bir şık seçtim inanıp derinden Umrumda mı sandın dünya
oynasın yerinden Kıyamet kopsun severim erinmem Ölümden değil korkum
gittiğimde yenilmen O gün cevap bulursun öpmek yarayı geçirmez Buraya
kadar, kanayamam artık Yapıştı yakama, buraya kadar Kaçamam asla yine
yakalar Son nefesini ver, buraya kadar
example_title: (Hip-Hop) Nefes Alamam - Aspova
- text: Bedava yaşıyoruz, dostlar bedava Hava bedava, bulut bedava Dere tepe
bedava, yağmur çamur bedava Bedava yaşıyoruz, dostlar bedava Hava bedava,
bulut bedava Dere tepe bedava, yağmur çamur bedava Otomobillerin dışı,
sinemaların kapısı Otomobillerin dışı, sinemaların kapısı Camekanlar,
onlar bedava Camekanlar, onlar bedava Peynir ekmek değil ama acı su bedava
Kelle fiyatına hürriyet, esirlik bedava Peynir ekmek değil ama acı su
bedava Kelle fiyatına hürriyet, esirlik bedava Bedava yaşıyoruz, dostlar
bedava
example_title: (Rock) Bedava Yaşıyoruz - Cem Karaca
- text: Nikâhına beni çağır, sevgilim İstersen şahidin olurum senin Bu adam kim?
diye soran olursa Eski bir tanıdık dersin, sevgilim Nikâhına beni çağır
sevgilim İstersen şahidin olurum senin Bu adam kim diye soran olursa Eski
bir tanıdık dersin, sevgilim Hayaller kurardık biz yıllar önce Hiç yoktu
hesapta ayrılık bizce Bilirsin ne kadar görmek isterdim Beyazlar içinde
seni öylece Hayaller kurardık biz yıllar önce Hiç yoktu hesapta ayrılık
bizce Bilirsin ne kadar görmek isterdim Beyazlar içinde seni öylece
Garibin biriysem sevemez miyim? Aşkla karın doymaz diyen ben miyim? Şimdi
çok zenginsin, ben ayrı garip Sana bir buket gül veremez miyim?
example_title: (Arabesk) Nikah Masası - Ümit Besen
---
# Lyrics Genre Classification Model
## Description
The model was fine-tuned from BERT on my [song lyrics dataset](https://huggingface.co/datasets/Veucci/turkish-lyric-to-genre) to predict a song's genre from its lyrics. This repository hosts the trained model, which classifies lyrics into four genres: Pop, Rock, Hip-Hop, and Arabesk.
For the training and evaluation code, see the [GitHub page](https://github.com/Veucci/turkish-lyric-to-genre).
## Dataset
The model was trained on a diverse and labeled dataset of song lyrics, which contained 3172 rows. The dataset was carefully curated to include songs from a wide range of artists and genres, ensuring a comprehensive representation of Pop, Rock, Hip-Hop and Arabesk music.
[DATASET](https://huggingface.co/datasets/Veucci/turkish-lyric-to-genre)
## Quick Start
```py
from transformers import pipeline
classifier = pipeline("text-classification", model="Veucci/turkish-lyric-to-genre")
result = classifier("Bedava yaşıyoruz, dostlar bedava Hava bedava, bulut bedava Dere tepe bedava, yağmur çamur bedava")
print(result)
```
## License
This model is released under the Creative Commons Attribution-NonCommercial license, which means it may not be used for commercial purposes. For detailed information about the license, please refer to the [LICENSE](./LICENSE) file.
## Contact
If you have any questions, suggestions, or concerns regarding this model, please feel free to reach out via email at [[email protected]](mailto:[email protected]).
I hope this model helps in your genre classification tasks and inspires further exploration of song lyrics analysis!
|
imvladikon/hebert_parashoot | imvladikon | 2023-08-10T17:39:15Z | 142 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"he",
"dataset:imvladikon/parashoot",
"arxiv:2109.11314",
"base_model:avichr/heBERT",
"base_model:finetune:avichr/heBERT",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-02T07:10:50Z | ---
base_model: avichr/heBERT
tags:
- generated_from_trainer
datasets:
- imvladikon/parashoot
model-index:
- name: hebert_parashoot
results: []
language:
- he
metrics:
- f1
- exact_match
pipeline_tag: question-answering
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hebert_parashoot
This model is a fine-tuned version of [avichr/heBERT](https://huggingface.co/avichr/heBERT) on the [imvladikon/parashoot](https://huggingface.co/datasets/imvladikon/parashoot) dataset.
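A minimal inference sketch using the standard `question-answering` pipeline; the Hebrew question and context below are illustrative examples, not drawn from the ParaShoot dataset:

```py
from transformers import pipeline

# Load the fine-tuned Hebrew extractive QA model
qa = pipeline("question-answering", model="imvladikon/hebert_parashoot")

# Illustrative Hebrew question/context pair (not from the dataset)
result = qa(
    question="באיזו שנה פורסם הספר?",
    context="הספר נכתב על ידי עמוס עוז ופורסם לראשונה בשנת 1968.",
)
print(result["answer"], result["score"])
```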
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
results:
```
{
"epoch": 5.0,
"eval_exact_match": 18.099547511312217,
"eval_f1": 36.8601893452485,
"eval_runtime": 6.7527,
"eval_samples": 249,
"eval_samples_per_second": 36.874,
"eval_steps_per_second": 4.739
}
```
(consistent with the results reported in https://arxiv.org/pdf/2109.11314.pdf: F1: 36.7, EM: 18.2)
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3 |
LarryAIDraw/miko-09 | LarryAIDraw | 2023-08-10T17:37:02Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T17:28:57Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/125385/miko-yotsuya-oror-mieruko-chan |
LarryAIDraw/ako | LarryAIDraw | 2023-08-10T17:36:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T17:27:57Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/16506/amau-akoblue-archive |
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s6789_v3_l6_r4 | KingKazma | 2023-08-10T17:35:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:35:02Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e3_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:34:05Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:34:01Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
cecb/bitcoin-tweets-sentiment-llama2model | cecb | 2023-08-10T17:31:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T14:25:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
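A minimal loading sketch that mirrors this configuration; the base checkpoint name below is an assumption, since the card does not state which LLaMA-2 variant the adapter was trained from:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 8-bit quantization settings listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# Assumed base checkpoint -- not named in this card
base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "cecb/bitcoin-tweets-sentiment-llama2model")
```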
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e2_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:27:07Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:27:03Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
justarandom8/amazon_sentiment_model | justarandom8 | 2023-08-10T17:24:08Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-08-10T17:24:05Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
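A minimal sketch of rebuilding the optimizer from the values above; the compile call is commented out because the architecture and loss function are not documented in this card:

```py
import tensorflow as tf

# Recreate the Adam optimizer with the hyperparameters listed above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.0010000000474974513,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)

# model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])  # loss is an assumption
```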
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
royam0820/llama2-CodeInstr-ft | royam0820 | 2023-08-10T17:23:40Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:23:31Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
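A minimal loading sketch that reproduces the quantization settings above; the base checkpoint is an assumption, as the card does not name the model this adapter was fine-tuned from:

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reproduce the 4-bit NF4 settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base checkpoint -- not named in this card
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "royam0820/llama2-CodeInstr-ft")
```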
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s6789_v3_l6_r4 | KingKazma | 2023-08-10T17:20:29Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:20:27Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e1_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:20:08Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:20:04Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e-1_s6789_v3_l6_r4 | KingKazma | 2023-08-10T17:13:13Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:13:11Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_lora_500_10_3000_8_e0_s6789_v3_l4_r4 | KingKazma | 2023-08-10T17:13:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:13:07Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s6789_v3_l6_v100_manual | KingKazma | 2023-08-10T17:07:31Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:07:29Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Against61/llama2-qlora-finetunined-SFU2 | Against61 | 2023-08-10T17:00:09Z | 3 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T17:00:01Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Shreekant162/FineTuned | Shreekant162 | 2023-08-10T16:45:12Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-10T16:45:01Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: FineTuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# FineTuned
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_l6_v100_manual | KingKazma | 2023-08-10T16:45:02Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:45:00Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l5_v100 | KingKazma | 2023-08-10T16:41:29Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-10T16:41:26Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|