modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
timjwhite/whisper-tiny-dv | timjwhite | 2023-08-05T23:43:44Z | 86 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-05T11:31:30Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[-19%:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3484562066792691
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7263
- Wer Ortho: 0.3483
- Wer: 0.3485
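For reference, WER (word error rate) is the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal, dependency-free sketch of the metric follows; the scores above were presumably computed with the standard `evaluate` tooling, so this is illustrative only:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("play the next song", "play next song"))  # 0.25: one deletion over four words
```

"Wer Ortho" above is the same metric on orthographic (unnormalized) text, while "Wer" is computed after text normalization.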
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
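The `constant_with_warmup` schedule means the learning rate ramps linearly over the first 50 steps and then stays at 1e-05 for the remaining steps. A sketch of that rule, mirroring (but not reproducing) `transformers.get_constant_schedule_with_warmup`:

```python
def lr_at(step: int, base_lr: float = 1e-05, warmup_steps: int = 50) -> float:
    """constant_with_warmup: linear ramp from 0 during warmup, then flat."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(lr_at(25))   # halfway through warmup: 5e-06
print(lr_at(500))  # constant after step 50: 1e-05
```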
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0008 | 17.24 | 500 | 0.6662 | 0.3483 | 0.3491 |
| 0.0002 | 34.48 | 1000 | 0.7263 | 0.3483 | 0.3485 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ld76/speecht5_finetuned_voxpopuli_nl | ld76 | 2023-08-05T23:41:04Z | 82 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | 2023-08-05T20:33:36Z | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
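The `total_train_batch_size` of 32 comes from gradient accumulation: eight micro-batches of 4 are accumulated before each optimizer step. A framework-agnostic sketch of that pattern (not the Trainer's actual loop):

```python
def train_with_accumulation(micro_batch_grads, apply_update, accum_steps=8):
    """Average gradients from `accum_steps` micro-batches before each optimizer
    step, so 4 examples/batch x 8 steps behaves like one batch of 32."""
    accum = 0.0
    for step, grad in enumerate(micro_batch_grads, start=1):
        accum += grad / accum_steps  # scale so the running sum is an average
        if step % accum_steps == 0:
            apply_update(accum)      # one optimizer step per 8 micro-batches
            accum = 0.0

updates = []
train_with_accumulation([1.0] * 8, updates.append)
print(updates)  # [1.0]: eight identical micro-gradients average to themselves
```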
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5074 | 18.1 | 1000 | 0.4658 |
| 0.4824 | 36.2 | 2000 | 0.4533 |
| 0.4766 | 54.3 | 3000 | 0.4530 |
| 0.4745 | 72.4 | 4000 | 0.4509 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
sandeep12345/new_biofilm_LLM | sandeep12345 | 2023-08-05T23:39:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T23:38:30Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
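To give intuition for what 4-bit quantization means here, a toy absmax int4 scheme follows. This is a simplification for illustration only: the `nf4` type configured above uses 16 non-uniform levels fitted to a normal distribution inside bitsandbytes' CUDA kernels, with float16 compute, not this code:

```python
def quantize_absmax_int4(values):
    """Toy symmetric 4-bit quantization: scale by the absolute max so every
    value maps to an integer in [-7, 7] plus one shared fp scale factor."""
    scale = max(abs(v) for v in values) / 7
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate fp values from the 4-bit integers."""
    return [q * scale for q in quantized]

q, s = quantize_absmax_int4([0.5, -1.0, 0.25])
print(q)  # [4, -7, 2]
```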
### Framework versions
- PEFT 0.5.0.dev0
|
sandeep1chataut/biofilm_custom_llama_finetune | sandeep1chataut | 2023-08-05T23:25:17Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T23:24:39Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Za88yes/Ri5 | Za88yes | 2023-08-05T23:18:05Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T18:10:21Z | ---
license: creativeml-openrail-m
---
|
BreadAi/MuseCan-1-2 | BreadAi | 2023-08-05T22:38:58Z | 211 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"dataset:breadlicker45/musenet-encoders-12k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-21T10:06:37Z | ---
datasets:
- breadlicker45/musenet-encoders-12k
--- |
CyberHarem/privaty_nikke | CyberHarem | 2023-08-05T22:38:56Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/privaty_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T22:35:19Z | ---
license: mit
datasets:
- CyberHarem/privaty_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of privaty_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 1500, download `1500/privaty_nikke.pt` as the embedding and `1500/privaty_nikke.safetensors` for the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `privaty_nikke`.**
These are the available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/privaty_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/privaty_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/privaty_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/privaty_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/privaty_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/privaty_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/privaty_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/privaty_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/privaty_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/privaty_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/privaty_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/privaty_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/privaty_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/privaty_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/privaty_nikke.zip) |
|
eliorcohen/ppo-Huggy | eliorcohen | 2023-08-05T22:22:03Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-08-05T22:21:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: eliorcohen/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Eilliar/llama-2-7b-test | Eilliar | 2023-08-05T22:20:51Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-04T14:41:55Z | This is a test: the model was fine-tuned on Colab using the [mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset.
I'm just curious to learn the fine-tuning and upload process. |
helamri/dqn-SpaceInvadersNoFrameskip-v4 | helamri | 2023-08-05T22:15:19Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T22:14:43Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 622.50 +/- 134.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga helamri -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga helamri -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga helamri
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
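The `exploration_fraction` and `exploration_final_eps` entries define DQN's linear epsilon-greedy schedule. A hedged sketch of how SB3 interprets these values, plugging in the numbers from the table above:

```python
def epsilon(step, n_timesteps=1_000_000, exploration_fraction=0.1,
            initial_eps=1.0, final_eps=0.01):
    """Linear epsilon-greedy schedule: decay from initial_eps to final_eps
    over the first exploration_fraction of training, then hold."""
    progress = min(step / (exploration_fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))        # 1.0: fully random at the start
print(epsilon(50_000))   # ~0.505: halfway through the exploration phase
print(epsilon(500_000))  # ~0.01: 1% random actions for the rest of training
```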
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2 | salohnana2018 | 2023-08-05T22:07:34Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"pytorch",
"tensorboard",
"bert",
"adapterhub:Arabic ABSA/SemEvalHotelReview",
"dataset:Hotel",
"region:us"
] | null | 2023-08-05T21:24:15Z | ---
tags:
- adapterhub:Arabic ABSA/SemEvalHotelReview
- adapter-transformers
- bert
datasets:
- Hotel
---
# Adapter `salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2` for CAMeL-Lab/bert-base-arabic-camelbert-msa
An [adapter](https://adapterhub.ml) for the `CAMeL-Lab/bert-base-arabic-camelbert-msa` model that was trained on the [Arabic ABSA/SemEvalHotelReview](https://adapterhub.ml/explore/Arabic ABSA/SemEvalHotelReview/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("CAMeL-Lab/bert-base-arabic-camelbert-msa")
adapter_name = model.load_adapter("salohnana2018/ABSA-SentencePair-corrected-domainAdapt-Stack-HARD50-Adapter-pfeiffer-run2", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
marc-bordessoule/llama2-qlora-finetunined-french | marc-bordessoule | 2023-08-05T21:49:46Z | 2 | 0 | peft | [
"peft",
"text-generation",
"region:us"
] | text-generation | 2023-07-31T07:28:37Z | ---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0 |
CyberHarem/noir_nikke | CyberHarem | 2023-08-05T21:36:48Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/noir_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T21:33:15Z | ---
license: mit
datasets:
- CyberHarem/noir_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of noir_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 1500, download `1500/noir_nikke.pt` as the embedding and `1500/noir_nikke.safetensors` for the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `noir_nikke`.**
These are the available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/noir_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/noir_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/noir_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/noir_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/noir_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/noir_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/noir_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/noir_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/noir_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/noir_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/noir_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/noir_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/noir_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/noir_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/noir_nikke.zip) |
|
s3nh/Llama2-Chinese-13b-Chat-GGML | s3nh | 2023-08-05T21:26:45Z | 0 | 8 | transformers | [
"transformers",
"text-generation",
"zh",
"license:openrail",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-05T20:30:45Z | ---
license: openrail
language:
- zh
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
### Description
GGML format model files for [this project](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat).
### Inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders for the local download directory
# and the GGML weights filename from this repository
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
```
# Original model card
# Llama2 Chinese Community
---
## Llama2 Chinese fine-tuned parameters
Because Llama2's native Chinese alignment is weak, we fine-tuned meta-llama/Llama-2-13b-chat-hf with LoRA on a Chinese instruction set, giving it strong Chinese conversational ability.
🎯 **This version merges the LoRA Chinese fine-tuned parameters FlagAlpha/Llama2-Chinese-13b-Chat-LoRA with the meta-llama/Llama-2-13b-chat-hf parameters, and can be used directly.**
---
## 🚀 Community links:
GitHub: [**Llama2-Chinese**](https://github.com/FlagAlpha/Llama2-Chinese)
Online demo: [**llama.family**](https://llama.family/)
## 🔥 About the community
Welcome to the Llama2 Chinese community!
We are an advanced technical community focused on optimizing Llama2 for Chinese and building applications on top of it.
**Starting from pre-training on large-scale Chinese data, we continuously iterate on and upgrade the model's Chinese capabilities.**
We warmly welcome developers and researchers who are passionate about LLMs to join us.
## 🐼 Community resources
- Llama2 online demo [**llama.family**](https://llama.family/), covering both the original Meta version and the Chinese fine-tuned version!
- [Chinese Q&A capability evaluation](https://github.com/FlagAlpha/Llama2-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama2 Chat models!
- [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink), which everyone is welcome to help build! |
shubhamagarwal92/ppo-PyramidsTraining | shubhamagarwal92 | 2023-08-05T21:10:49Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-08-05T21:10:43Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: shubhamagarwal92/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Henk717/spring-dragon-qlora | Henk717 | 2023-08-05T21:06:52Z | 6 | 7 | peft | [
"peft",
"tensorboard",
"region:us"
] | null | 2023-08-05T20:59:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Patsflynn/ppo-lunar-lander | Patsflynn | 2023-08-05T21:06:45Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T21:06:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.90 +/- 21.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is a placeholder filename
checkpoint = load_from_hub("Patsflynn/ppo-lunar-lander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
arindamatcalgm/w266_model3_BERT_CNN | arindamatcalgm | 2023-08-05T21:06:19Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-03T03:06:37Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: w266_model3_BERT_CNN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w266_model3_BERT_CNN
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7935
- Accuracy: 0.67
- F1: 0.6539863523155215
- Precision: 0.6655888523241464
- Recall: 0.67
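The reported scores are the weighted averages computed by the `evaluate` library across all classes; for intuition, here is a per-class sketch of the underlying definitions (illustrative only, not the card's actual metric code):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 from first principles."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)   # of the predicted positives, how many were right
    recall = tp / (tp + fn)      # of the true positives, how many were found
    return precision, recall, 2 * precision * recall / (precision + recall)

p, r, f1 = precision_recall_f1([1, 1, 0, 1, 0], [1, 0, 0, 1, 1], positive=1)
print(p, r, f1)  # tp=2, fp=1, fn=1, so all three are ~0.667
```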
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------------------:|:------------------:|:------:|
| 0.7881 | 1.0 | 1923 | 0.8177 | 0.638 | 0.6219209356584174 | 0.6325213408748697 | 0.638 |
| 0.649 | 2.0 | 3846 | 0.8257 | 0.669 | 0.6701535233107099 | 0.672307962349643 | 0.669 |
| 0.4771 | 3.0 | 5769 | 0.8922 | 0.676 | 0.6778795418743319 | 0.6805694646691987 | 0.676 |
| 0.3403 | 4.0 | 7692 | 1.4285 | 0.669 | 0.666176554548987 | 0.6653390405441227 | 0.669 |
| 0.2088 | 5.0 | 9615 | 1.7417 | 0.67 | 0.6716636513157895 | 0.6752339933799478 | 0.67 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
patonw/Reinforce-Pixelcopter-PLE-v0 | patonw | 2023-08-05T21:03:29Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T01:33:19Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 106.80 +/- 100.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
arhamk/q-Taxi-v3 | arhamk | 2023-08-05T21:03:17Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-02T19:56:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="arhamk/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
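At evaluation time the loaded model is essentially a Q-table, and the policy is greedy: take the argmax action for the current state. A self-contained toy illustration (the real Taxi-v3 table is 500 states by 6 actions, not this one):

```python
# Toy Q-table for a 3-state, 2-action problem
q_table = [
    [0.1, 0.9],    # state 0 -> action 1 has the highest value
    [0.5, 0.2],    # state 1 -> action 0
    [-0.3, -0.1],  # state 2 -> action 1
]

def act(state):
    """Greedy policy: pick the action with the largest Q-value."""
    row = q_table[state]
    return row.index(max(row))

print([act(s) for s in range(3)])  # [1, 0, 1]
```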
|
polejowska/detr-r50-cd45rb-8ah-6l-dilation-corrected | polejowska | 2023-08-05T21:01:51Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2023-08-04T07:11:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-6l-dilation-corrected
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-6l-dilation-corrected
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.4024 | 1.0 | 4606 | 1.7613 |
| 2.1798 | 2.0 | 9212 | 1.7139 |
| 2.1158 | 3.0 | 13818 | 1.6784 |
| 2.0907 | 4.0 | 18424 | 1.6514 |
| 2.0665 | 5.0 | 23030 | 1.6573 |
| 2.0511 | 6.0 | 27636 | 1.6508 |
| 2.0401 | 7.0 | 32242 | 1.6145 |
| 2.0217 | 8.0 | 36848 | 1.6353 |
| 2.0119 | 9.0 | 41454 | 1.6176 |
| 1.9921 | 10.0 | 46060 | 1.6012 |
| 1.9841 | 11.0 | 50666 | 1.5832 |
| 1.9774 | 12.0 | 55272 | 1.6204 |
| 1.9567 | 13.0 | 59878 | 1.5836 |
| 1.9542 | 14.0 | 64484 | 1.5789 |
| 1.9347 | 15.0 | 69090 | 1.5565 |
| 1.9348 | 16.0 | 73696 | 1.5833 |
| 1.9188 | 17.0 | 78302 | 1.5547 |
| 1.9085 | 18.0 | 82908 | 1.5456 |
| 1.8956 | 19.0 | 87514 | 1.5433 |
| 1.8891 | 20.0 | 92120 | 1.5555 |
| 1.8899 | 21.0 | 96726 | 1.5278 |
| 1.8782 | 22.0 | 101332 | 1.5235 |
| 1.8676 | 23.0 | 105938 | 1.5314 |
| 1.8699 | 24.0 | 110544 | 1.5172 |
| 1.8627 | 25.0 | 115150 | 1.5202 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CyberHarem/neon_nikke | CyberHarem | 2023-08-05T20:57:49Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/neon_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T20:54:35Z | ---
license: mit
datasets:
- CyberHarem/neon_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of neon_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as the LoRA weights.
For example, to use the model from step 1500, download `1500/neon_nikke.pt` as the embedding and `1500/neon_nikke.safetensors` for the LoRA. With both files loaded together, you can generate images of the desired character.
**The trigger word is `neon_nikke`.**
These are the available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/neon_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/neon_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/neon_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/neon_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/neon_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/neon_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/neon_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/neon_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/neon_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/neon_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/neon_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/neon_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/neon_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/neon_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/neon_nikke.zip) |
|
ClementXie/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan-finetuned-gtzan | ClementXie | 2023-08-05T20:40:23Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ptah23/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
"base_model:finetune:ptah23/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-08-05T20:10:40Z | ---
license: bsd-3-clause
base_model: ptah23/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [ptah23/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan](https://huggingface.co/ptah23/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7839
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
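The effective batch size of 8 comes from gradient accumulation: 2 examples per device times 4 accumulation steps. As a toy sketch of why averaging micro-batch gradients matches one large batch (illustrative Python on a linear least-squares gradient, not the Trainer's actual code):

```python
def grad(w, batch):
    # Gradient of mean squared error 0.5 * (w*x - y)**2 with respect to w.
    return sum((w * x - y) * x for x, y in batch) / len(batch)

data = [(float(i), 2.0 * i) for i in range(8)]  # y = 2x, 8 examples

w = 0.0
accum = 0.0
for step in range(4):                     # 4 micro-batches of size 2
    micro = data[step * 2:(step + 1) * 2]
    accum += grad(w, micro) / 4           # scale each micro-batch gradient

big = grad(w, data)                       # one full batch of size 8
assert abs(accum - big) < 1e-9            # same update direction and magnitude
```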
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0007 | 1.0 | 112 | 0.7015 | 0.82 |
| 0.063 | 2.0 | 225 | 0.7797 | 0.82 |
| 0.1259 | 3.0 | 337 | 1.1225 | 0.83 |
| 0.0003 | 4.0 | 450 | 0.5694 | 0.89 |
| 0.0016 | 5.0 | 562 | 0.7449 | 0.89 |
| 0.0 | 6.0 | 675 | 0.9446 | 0.89 |
| 0.0 | 7.0 | 787 | 0.8780 | 0.88 |
| 0.0 | 8.0 | 900 | 0.7953 | 0.89 |
| 0.0988 | 9.0 | 1012 | 0.7962 | 0.9 |
| 0.0 | 9.96 | 1120 | 0.7839 | 0.9 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.13.1
- Datasets 2.14.3
- Tokenizers 0.13.2
|
CyberHarem/scarlet_nikke | CyberHarem | 2023-08-05T20:18:23Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/scarlet_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T20:15:21Z | ---
license: mit
datasets:
- CyberHarem/scarlet_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of scarlet_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `.pt` and `.safetensors` files for the desired step, use them together: the `.pt` file is loaded as an embedding, and the `.safetensors` file holds the LoRA weights.
For example, to use the model from step 1500, download `1500/scarlet_nikke.pt` as the embedding and `1500/scarlet_nikke.safetensors` as the LoRA. With both files loaded, you can generate images of the character.
**The trigger word is `scarlet_nikke`.**
These are the available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/scarlet_nikke.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/scarlet_nikke.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/scarlet_nikke.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/scarlet_nikke.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/scarlet_nikke.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/scarlet_nikke.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/scarlet_nikke.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/scarlet_nikke.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/scarlet_nikke.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/scarlet_nikke.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/scarlet_nikke.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/scarlet_nikke.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/scarlet_nikke.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/scarlet_nikke.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/scarlet_nikke.zip) |
|
Ussen/whisper-medium-swc-drc-kat-1 | Ussen | 2023-08-05T19:56:35Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:Ussen/swc-drc-kat",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-05T16:32:32Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- Ussen/swc-drc-kat
metrics:
- wer
model-index:
- name: whisper-medium-swc-drc-kat-1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Ussen/swc-drc-kat
type: Ussen/swc-drc-kat
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.49379203310915676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-swc-drc-kat-1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Ussen/swc-drc-kat dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9701
- Wer Ortho: 50.0388
- Wer: 0.4938
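Note the units: `Wer` is reported as a fraction on normalized text, while `Wer Ortho` is reported as a percentage and is typically computed on orthographic (unnormalized) text. Word error rate is the word-level edit distance between hypothesis and reference divided by the number of reference words. A minimal sketch of the metric (an illustration, not the exact evaluation code used here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("habari ya asubuhi", "habari za asubuhi"))  # 1 substitution / 3 reference words
```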
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.6769 | 2.96 | 1000 | 0.8341 | 51.2296 | 0.5072 |
| 0.365 | 5.93 | 2000 | 0.8083 | 49.3917 | 0.4876 |
| 0.165 | 8.89 | 3000 | 0.8806 | 51.3073 | 0.5067 |
| 0.059 | 11.85 | 4000 | 0.9701 | 50.0388 | 0.4938 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
o33iemars/Gpt | o33iemars | 2023-08-05T19:44:14Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-05T19:41:46Z | ---
license: bigscience-openrail-m
---
|
Surya-Teja-Menta/PPO-LunarLander-v2 | Surya-Teja-Menta | 2023-08-05T19:41:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T19:05:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MLPpolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.64 +/- 14.87
name: mean_reward
verified: false
---
# **MLPpolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MLPpolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
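The reported `mean_reward` of 273.64 +/- 14.87 is the average episodic return over the evaluation episodes, and the ± value is the standard deviation across episodes (this is what SB3-style evaluation reports, not a confidence interval). A sketch of that summary with made-up episode returns:

```python
import math

def summarize_returns(episode_returns):
    """Mean and (population) standard deviation of per-episode returns."""
    n = len(episode_returns)
    mean = sum(episode_returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in episode_returns) / n)
    return mean, std

# Illustrative values only, not the actual evaluation episodes.
mean, std = summarize_returns([260.0, 270.0, 280.0, 284.56])
```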
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; verify against the files actually present in the repo.
checkpoint = load_from_hub("Surya-Teja-Menta/PPO-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CyberHarem/volume_nikke | CyberHarem | 2023-08-05T19:39:14Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/volume_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T19:35:13Z | ---
license: mit
datasets:
- CyberHarem/volume_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of volume_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the `.pt` and `.safetensors` files for the desired step, use them together: the `.pt` file is loaded as an embedding, and the `.safetensors` file holds the LoRA weights.
For example, to use the model from step 1500, download `1500/volume_nikke.pt` as the embedding and `1500/volume_nikke.safetensors` as the LoRA. With both files loaded, you can generate images of the character.
**The trigger word is `volume_nikke`.**
These are the available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:----------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/volume_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/volume_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/volume_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/volume_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/volume_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/volume_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/volume_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/volume_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/volume_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/volume_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/volume_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/volume_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/volume_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/volume_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/volume_nikke.zip) |
|
Indra99-01/food_semeval_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM_v1_60.pt | Indra99-01 | 2023-08-05T19:38:38Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T19:38:35Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
kejolong/police | kejolong | 2023-08-05T19:35:46Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T19:30:51Z | ---
license: creativeml-openrail-m
---
|
arhamk/a2c-AntBulletEnv-v0 | arhamk | 2023-08-05T19:27:40Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T19:26:33Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 925.76 +/- 168.74
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; verify against the files actually present in the repo.
checkpoint = load_from_hub("arhamk/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
tilyupo/t5-large-trivia-ca2q | tilyupo | 2023-08-05T19:10:40Z | 4 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-04T08:59:03Z | ---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_keras_callback
model-index:
- name: t5-large-trivia-v2-ca2q
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-large-trivia-v2-ca2q
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1879
- Validation Loss: 0.3243
- Epoch: 2
<pre>
{'eval_loss': 1.0877012014389038,
'eval_bleu': 21.018623207468856,
'eval_rouge1': 58.42,
'eval_rouge2': 35.27,
'eval_rougeL': 51.13,
'eval_rougeLsum': 51.15,
'eval_exact': 0.02536196676707803,
'eval_runtime': 346.7508,
'eval_samples_per_second': 29.678,
'eval_steps_per_second': 0.929}
</pre>
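In the listing above, `eval_exact` is the exact-match rate: the fraction of generated questions identical to the reference. A minimal sketch of the metric (illustrative; the actual evaluation may normalize text differently):

```python
def exact_match(predictions, references):
    """Fraction of predictions matching their reference exactly (after stripping whitespace)."""
    assert len(predictions) == len(references)
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

score = exact_match(["Who wrote Hamlet?", "When?"], ["Who wrote Hamlet?", "Where?"])
```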
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4719 | 0.3053 | 0 |
| 0.2556 | 0.3032 | 1 |
| 0.1879 | 0.3243 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
tilyupo/t5-base-trivia-ca2q | tilyupo | 2023-08-05T18:45:13Z | 60 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-04T08:15:43Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_keras_callback
model-index:
- name: t5-base-trivia-v2-ca2q
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-base-trivia-v2-ca2q
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2541
- Validation Loss: 0.3480
- Epoch: 2
<pre>
{'eval_loss': 1.2103511095046997,
'eval_bleu': 19.63270019311908,
'eval_rouge1': 57.01,
'eval_rouge2': 33.76,
'eval_rougeL': 49.73,
'eval_rougeLsum': 49.74,
'eval_exact': 0.022446798173161014,
'eval_runtime': 224.6161,
'eval_samples_per_second': 45.816,
'eval_steps_per_second': 1.434}
</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5159 | 0.3420 | 0 |
| 0.3061 | 0.3373 | 1 |
| 0.2541 | 0.3480 | 2 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
VicBeltran/dqn-SpaceInvadersNoFrameskip-v4 | VicBeltran | 2023-08-05T18:44:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T18:41:04Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 332.50 +/- 92.99
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VicBeltran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VicBeltran -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VicBeltran
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
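With `exploration_fraction` 0.1 and `exploration_final_eps` 0.01 over 1,000,000 timesteps, exploration ε is annealed linearly from 1.0 to 0.01 during the first 100,000 steps and then held constant. A sketch of that schedule (mirroring the idea behind SB3's linear schedule, not its exact code):

```python
def exploration_eps(step, total_timesteps=1_000_000,
                    exploration_fraction=0.1, final_eps=0.01, initial_eps=1.0):
    """Linear epsilon annealing over the first fraction of training, then constant."""
    end = exploration_fraction * total_timesteps  # 100_000 steps here
    if step >= end:
        return final_eps
    frac = step / end
    return initial_eps + frac * (final_eps - initial_eps)

assert exploration_eps(0) == 1.0
assert abs(exploration_eps(50_000) - 0.505) < 1e-9   # halfway through annealing
assert exploration_eps(200_000) == 0.01              # constant afterwards
```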
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
tilyupo/t5-small-trivia-ca2q | tilyupo | 2023-08-05T18:39:22Z | 59 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-04T07:19:19Z | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-trivia-v2-ca2q
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-trivia-v2-ca2q
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3095
- Validation Loss: 0.3903
- Epoch: 3
<pre>
{'eval_loss': 1.3911314010620117,
'eval_bleu': 17.919726187841192,
'eval_rouge1': 54.15,
'eval_rouge2': 31.12,
'eval_rougeL': 47.29,
'eval_rougeLsum': 47.32,
'eval_exact': 0.020600524730346906,
'eval_runtime': 104.3595,
'eval_samples_per_second': 98.611,
'eval_steps_per_second': 3.085}
</pre>
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adafactor', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_2_decay': -0.8, 'epsilon_1': 1e-30, 'epsilon_2': 0.001, 'clip_threshold': 1.0, 'relative_step': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6099 | 0.4054 | 0 |
| 0.3919 | 0.3899 | 1 |
| 0.3451 | 0.3880 | 2 |
| 0.3095 | 0.3903 | 3 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
konverner/due_retail_25 | konverner | 2023-08-05T18:36:06Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-04T14:36:24Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# konverner/due_retail_25
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("konverner/due_retail_25")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
openflamingo/OpenFlamingo-4B-vitl-rpj3b | openflamingo | 2023-08-05T18:28:05Z | 0 | 3 | null | [
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:22:22Z | ---
language: en
datasets:
- laion2b
---
# OpenFlamingo-4B (CLIP ViT-L/14, RedPajama-INCITE-Base-3B-v1)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 4B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [RedPajama-3B](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402), [Multimodal C4](https://arxiv.org/abs/2304.06939), and custom ChatGPT-generated sequences using images from LAION (to be released soon).
This model has cross-attention modules inserted in *every other* decoder block. It was trained using FullyShardedDataParallel across 64 A100 40GB GPUs at FP32 precision.
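As an illustration of the "every other decoder block" layout, here is a toy sketch of the interleaving pattern (illustrative only; the real implementation inserts gated cross-attention modules into the frozen language model, placed before the decoder blocks as in Flamingo):

```python
def layer_layout(num_decoder_layers, cross_attn_every_n_layers):
    """Toy layout: a gated cross-attention block before every n-th frozen decoder layer."""
    layout = []
    for i in range(num_decoder_layers):
        if i % cross_attn_every_n_layers == 0:
            layout.append("gated_xattn")
        layout.append(f"decoder_{i}")
    return layout

layout = layer_layout(4, 2)
# ['gated_xattn', 'decoder_0', 'decoder_1', 'gated_xattn', 'decoder_2', 'decoder_3']
```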
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
tokenizer_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
cross_attn_every_n_layers=2
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-4B-vitl-rpj3b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>76.7 (0.2)</td>
<td>81.8 (0.4)</td>
<td>90.7 (0.3)</td>
<td>93.9 (0.4)</td>
<td>95.1 (0.3)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>45.7 (0.2)</td>
<td>49.1 (0.1)</td>
<td>47.1 (0.1)</td>
<td>45.8 (0.1)</td>
<td>43.1 (0.5)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>53.6 (0.9)</td>
<td>60.7 (1.2)</td>
<td>55.9 (1.3)</td>
<td>56.8 (0.5)</td>
<td>56.9 (0.7)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>28.2 (0.3)</td>
<td>33.9 (0.3)</td>
<td>31.0 (0.3)</td>
<td>30.0 (0.2)</td>
<td>25.8 (0.6)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>21.0 (0.3)</td>
<td>25.9 (0.0)</td>
<td>21.3 (0.2)</td>
<td>18.2 (0.4)</td>
<td>14.1 (0.2)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>15.4 (0.3)</td>
<td>23.2 (0.5)</td>
<td>26.8 (0.7)</td>
<td>34.2 (1.4)</td>
<td>39.9 (0.6)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>53.9 (2.9)</td>
<td>54.8 (1.2)</td>
<td>55.9 (2.5)</td>
<td>56.7 (0.6)</td>
<td>56.2 (2.0)</td>
</tr>
</table>
|
openflamingo/OpenFlamingo-9B-vitl-mpt7b | openflamingo | 2023-08-05T18:27:50Z | 0 | 41 | null | [
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:22:51Z | ---
language: en
datasets:
- laion2b
---
# OpenFlamingo-9B (CLIP ViT-L/14, MPT-7B)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 9B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
This model has cross-attention modules inserted in *every fourth* decoder block. It was trained using DistributedDataParallel across 64 A100 80GB GPUs at automatic BF16 mixed precision.
To use these MPT weights, OpenFlamingo must be initialized using revision `68e1a8e0ebb9b30f3c45c1ef6195980f29063ae2` of the MPT-7B modeling code. We suggest using [this copy of the model](https://huggingface.co/anas-awadalla/mpt-7b) to ensure the code is loaded at that commit.
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="anas-awadalla/mpt-7b",
tokenizer_path="anas-awadalla/mpt-7b",
cross_attn_every_n_layers=4
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-9B-vitl-mpt7b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
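As an aside, the `padding_side = "left"` setting above matters for batched generation: new tokens are appended after the last position of each prompt, so prompts must end with real tokens rather than padding. A toy illustration in plain Python (no tokenizer involved):

```python
# With left padding, every prompt ends at the last position, so generation
# continues from real tokens; right padding would ask the model to continue
# after <pad> tokens instead.
PAD = "<pad>"
prompts = [["An", "image", "of"], ["Two", "cats"]]
width = max(len(p) for p in prompts)
left_padded = [[PAD] * (width - len(p)) + p for p in prompts]
assert all(seq[-1] != PAD for seq in left_padded)  # last token is always real
```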
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>79.5 (0.2)</td>
<td>89.0 (0.3)</td>
<td>96.3 (0.1)</td>
<td>98.8 (0.7)</td>
<td>99.5 (0.1)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>50.3 (0.7)</td>
<td>50.5 (0.5)</td>
<td>52.8 (0.3)</td>
<td>52.3 (0.3)</td>
<td>50.5 (0.0)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>59.5 (1.0)</td>
<td>65.8 (0.6)</td>
<td>62.9 (1.0)</td>
<td>62.8 (1.0)</td>
<td>61.3 (0.7)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>34.7 (0.1)</td>
<td>34.3 (0.1)</td>
<td>38.4 (0.0)</td>
<td>39.5 (0.1)</td>
<td>38.1 (0.0)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>24.2 (0.5)</td>
<td>28.2 (0.4)</td>
<td>29.1 (0.1)</td>
<td>27.3 (0.1)</td>
<td>23.8 (0.2)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>17.7 (0.7)</td>
<td>23.1 (0.9)</td>
<td>31.6 (1.5)</td>
<td>38.0 (1.1)</td>
<td>40.2 (0.7)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>50.8 (4.7)</td>
<td>47.5 (2.2)</td>
<td>45.2 (2.7)</td>
<td>46.9 (3.8)</td>
<td>52.0 (2.1)</td>
</tr>
</table>
|
openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct | openflamingo | 2023-08-05T18:27:38Z | 0 | 5 | null | [
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:21:30Z | ---
language: en
datasets:
- laion2b
---
# OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B-Dolly)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and an instruction-tuned [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
This model has cross-attention modules inserted in *every* decoder block. It was trained using DistributedDataParallel across 64 A100 40GB GPUs at FP32 precision.
The [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) modeling code does not accept the `labels` kwarg or compute cross-entropy loss within `forward()`. To train with the OpenFlamingo codebase, we suggest using a version with the `labels` kwarg [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b-dolly).
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b-dolly",
tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b-dolly",
cross_attn_every_n_layers=1
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>74.4 (0.6)</td>
<td>82.7 (0.7)</td>
<td>87.8 (0.5)</td>
<td>91.9 (0.3)</td>
<td>94.8 (0.3)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>44.8 (0.7)</td>
<td>46.8 (0.5)</td>
<td>46.9 (0.9)</td>
<td>46.8 (0.7)</td>
<td>46.5 (0.5)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>51.2 (0.2)</td>
<td>59.1 (0.3)</td>
<td>60.7 (0.6)</td>
<td>63.0 (0.4)</td>
<td>64.5 (1.3)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>26.2 (0.3)</td>
<td>31.9 (0.2)</td>
<td>31.4 (0.4)</td>
<td>31.6 (0.3)</td>
<td>31.0 (0.1)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>23.1 (0.2)</td>
<td>28.1 (0.4)</td>
<td>29.1 (0.1)</td>
<td>29.1 (0.1)</td>
<td>28.5 (0.1)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>18.0 (0.6)</td>
<td>22.0 (0.8)</td>
<td>28.8 (1.3)</td>
<td>35.5 (0.8)</td>
<td>41.3 (0.5)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>54.3 (2.5)</td>
<td>53.5 (1.1)</td>
<td>52.1 (2.6)</td>
<td>52.3 (3.0)</td>
<td>51.0 (2.3)</td>
</tr>
</table>
|
openflamingo/OpenFlamingo-3B-vitl-mpt1b | openflamingo | 2023-08-05T18:27:20Z | 0 | 11 | null | [
"en",
"dataset:laion2b",
"arxiv:2308.01390",
"arxiv:2210.08402",
"arxiv:2304.06939",
"region:us"
] | null | 2023-06-13T21:22:05Z | ---
language: en
datasets:
- laion2b
---
# OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B)
[Paper](https://arxiv.org/abs/2308.01390) | [Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) language model.
## Model Details
We follow the Flamingo modeling paradigm, outfitting the layers of a pretrained, frozen language model such that they cross-attend to visual features when decoding. Following Flamingo, we freeze the vision encoder and language model but train the connecting modules on web-scraped image-text sequences. Specifically, we trained this model on a mixture of [LAION-2B](https://arxiv.org/abs/2210.08402) and [Multimodal C4](https://arxiv.org/abs/2304.06939).
This model has cross-attention modules inserted in *every* decoder block. It was trained using DistributedDataParallel across 64 A100 80GB GPUs at FP32 precision.
The [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b) modeling code does not accept the `labels` kwarg or compute cross-entropy loss within `forward()`. To train with the OpenFlamingo codebase, we suggest using a version with the `labels` kwarg [here](https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b).
## Uses
OpenFlamingo models process arbitrarily interleaved sequences of images and text to output text. This allows the models to accept in-context examples and undertake tasks like captioning, visual question answering, and image classification.
### Initialization
``` python
from open_flamingo import create_model_and_transforms
model, image_processor, tokenizer = create_model_and_transforms(
clip_vision_encoder_path="ViT-L-14",
clip_vision_encoder_pretrained="openai",
lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
cross_attn_every_n_layers=1
)
# grab model checkpoint from huggingface hub
from huggingface_hub import hf_hub_download
import torch
checkpoint_path = hf_hub_download("openflamingo/OpenFlamingo-3B-vitl-mpt1b", "checkpoint.pt")
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```
### Generation example
Below is an example of generating text conditioned on interleaved images/text. In particular, let's try few-shot image captioning.
``` python
from PIL import Image
import requests
"""
Step 1: Load images
"""
demo_image_one = Image.open(
requests.get(
"http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw
)
demo_image_two = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028137.jpg",
stream=True
).raw
)
query_image = Image.open(
requests.get(
"http://images.cocodataset.org/test-stuff2017/000000028352.jpg",
stream=True
).raw
)
"""
Step 2: Preprocessing images
Details: For OpenFlamingo, we expect the image to be a torch tensor of shape
batch_size x num_media x num_frames x channels x height x width.
In this case batch_size = 1, num_media = 3, num_frames = 1,
channels = 3, height = 224, width = 224.
"""
vision_x = [image_processor(demo_image_one).unsqueeze(0), image_processor(demo_image_two).unsqueeze(0), image_processor(query_image).unsqueeze(0)]
vision_x = torch.cat(vision_x, dim=0)
vision_x = vision_x.unsqueeze(1).unsqueeze(0)
"""
Step 3: Preprocessing text
Details: In the text we expect an <image> special token to indicate where an image is.
We also expect an <|endofchunk|> special token to indicate the end of the text
portion associated with an image.
"""
tokenizer.padding_side = "left" # For generation padding tokens should be on the left
lang_x = tokenizer(
["<image>An image of two cats.<|endofchunk|><image>An image of a bathroom sink.<|endofchunk|><image>An image of"],
return_tensors="pt",
)
"""
Step 4: Generate text
"""
generated_text = model.generate(
vision_x=vision_x,
lang_x=lang_x["input_ids"],
attention_mask=lang_x["attention_mask"],
max_new_tokens=20,
num_beams=3,
)
print("Generated text: ", tokenizer.decode(generated_text[0]))
```
### Bias, Risks, and Limitations
OpenFlamingo models inherit the risks of their parent models, especially the language model. As an open-source research effort, we highly value open, accessible, reproducible multimodal model research; however, it is crucial to be aware that these models are trained on web data, have not been finetuned for safety, and thus may produce unintended, inappropriate, unreliable, and/or inaccurate outputs. Please use caution before deploying OpenFlamingo models in real applications. We also hope that OpenFlamingo enables further safety and reliability research to address these issues.
In an effort to mitigate current potential biases and harms, we have deployed a text content filter on model outputs in the OpenFlamingo demo. We continue to red-team the model to understand and improve its safety.
## Evaluation
<table>
<tr>
<th></th>
<th>0-shot</th>
<th>4-shot</th>
<th>8-shot</th>
<th>16-shot</th>
<th>32-shot</th>
</tr>
<tr>
<th>COCO (CIDEr)</th>
<td>74.9 (0.2)</td>
<td>77.3 (0.3)</td>
<td>85.9 (0.6)</td>
<td>89.8 (0.2)</td>
<td>93.0 (0.6)</td>
</tr>
<tr>
<th>Flickr-30K (CIDEr)</th>
<td>52.3 (1.0)</td>
<td>57.2 (0.4)</td>
<td>58.6 (1.1)</td>
<td>59.2 (0.5)</td>
<td>61.1 (1.3)</td>
</tr>
<tr>
<th>VQAv2 (Accuracy)</th>
<td>44.6 (0.7)</td>
<td>45.9 (0.7)</td>
<td>45.8 (0.5)</td>
<td>45.5 (0.2)</td>
<td>45.8 (0.4)</td>
</tr>
<tr>
<th>OK-VQA (Accuracy)</th>
<td>26.8 (0.3)</td>
<td>27.6 (0.2)</td>
<td>27.7 (0.1)</td>
<td>28.4 (0.1)</td>
<td>29.3 (0.2)</td>
</tr>
<tr>
<th>TextVQA (Accuracy)</th>
<td>22.8 (0.2)</td>
<td>25.8 (0.2)</td>
<td>24.7 (0.1)</td>
<td>25.2 (0.2)</td>
<td>26.3 (0.2)</td>
</tr>
<tr>
<th>Vizwiz (Accuracy)</th>
<td>18.3 (0.6)</td>
<td>23.3 (1.1)</td>
<td>31.8 (0.7)</td>
<td>38.4 (1.1)</td>
<td>42.1 (0.6)</td>
</tr>
<tr>
<th>Hateful Memes (ROC AUC)</th>
<td>51.4 (3.3)</td>
<td>51.4 (0.6)</td>
<td>52.1 (0.7)</td>
<td>51.6 (1.1)</td>
<td>51.6 (1.6)</td>
</tr>
</table>
|
xyu1163/Testmodel_sentiment | xyu1163 | 2023-08-05T18:18:57Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:xyu1163/Testmodel_sentiment",
"base_model:finetune:xyu1163/Testmodel_sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T11:13:32Z | ---
license: apache-2.0
base_model: xyu1163/Testmodel_sentiment
tags:
- generated_from_trainer
model-index:
- name: Testmodel_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Testmodel_sentiment
This model is a fine-tuned version of [xyu1163/Testmodel_sentiment](https://huggingface.co/xyu1163/Testmodel_sentiment) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
jowid100/FineTunedBERTArgument | jowid100 | 2023-08-05T18:13:51Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T18:10:45Z | ## Model Details
A fine-tuned BERT model for argument mining.
|
arhamk/ppo-Pyramids | arhamk | 2023-08-05T17:50:26Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-08-05T17:38:13Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: arhamk/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Indra99-01/food_semeval_bigscience_bloomz-560m_PROMPT_TUNING_CAUSAL_LM_v1_50.pt | Indra99-01 | 2023-08-05T17:48:55Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T17:48:54Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
MattStammers/a2c-PandaReachDense-v2-take2 | MattStammers | 2023-08-05T17:41:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T14:31:08Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.97 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal sketch of loading the trained agent with `huggingface_sb3` (the checkpoint filename is an assumption; verify it against the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the files in the repository.
checkpoint = load_from_hub("MattStammers/a2c-PandaReachDense-v2-take2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
CyberHarem/anis_nikke | CyberHarem | 2023-08-05T17:41:09Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/anis_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T17:36:33Z | ---
license: mit
datasets:
- CyberHarem/anis_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of anis_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/anis_nikke.pt` as the embedding and `1500/anis_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `anis_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------|:-----------------------------------------------|:--------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/anis_nikke.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/bikini.png) |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/anis_nikke.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/bikini.png) |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/anis_nikke.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/bikini.png) |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/anis_nikke.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/bikini.png) |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/anis_nikke.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/anis_nikke.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/bikini.png) |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/anis_nikke.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/bikini.png) |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/anis_nikke.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/bikini.png) |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/anis_nikke.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/bikini.png) |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/anis_nikke.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/anis_nikke.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/bikini.png) |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/anis_nikke.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/bikini.png) |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/anis_nikke.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/bikini.png) |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/anis_nikke.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/bikini.png) |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/anis_nikke.zip) |
|
LovenOO/distilBERT_with_preprocessing | LovenOO | 2023-08-05T17:34:38Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T15:03:17Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LovenOO/distilBERT_with_preprocessing
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LovenOO/distilBERT_with_preprocessing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2890
- Validation Loss: 0.6104
- Train Accuracy: 0.8264
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2545, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
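For reference, `PolynomialDecay` with `power=1.0` and `cycle=False` is simply a linear ramp from the initial learning rate to the end rate over `decay_steps`, holding at the end value afterwards. A pure-Python sketch of the schedule configured above:

```python
def polynomial_decay(step, initial_lr=2e-05, decay_steps=2545, end_lr=0.0, power=1.0):
    """Learning rate at `step` under Keras PolynomialDecay with cycle=False."""
    step = min(step, decay_steps)  # hold at end_lr once decay_steps is reached
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr
```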
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6308 | 0.6631 | 0.8136 | 0 |
| 0.4767 | 0.6222 | 0.8264 | 1 |
| 0.3731 | 0.6148 | 0.8308 | 2 |
| 0.3117 | 0.6104 | 0.8264 | 3 |
| 0.2875 | 0.6104 | 0.8264 | 4 |
| 0.2890 | 0.6104 | 0.8264 | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.13.0
- Datasets 2.14.2
- Tokenizers 0.11.0
|
louie27/llama2-qlora-finetunined-french | louie27 | 2023-08-05T17:28:11Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T17:28:03Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
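For reference, the same settings can be expressed as a `transformers` `BitsAndBytesConfig` when reloading the base model for inference or further training (a sketch, not code shipped with this adapter):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```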
### Framework versions
- PEFT 0.5.0.dev0
|
CyberHarem/alice_nikke | CyberHarem | 2023-08-05T17:21:15Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/alice_nikke",
"license:mit",
"region:us"
] | text-to-image | 2023-08-05T17:15:18Z | ---
license: mit
datasets:
- CyberHarem/alice_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of alice_nikke
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/alice_nikke.pt` as the embedding and `1500/alice_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `alice_nikke`.**
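The loading procedure described above can be sketched with `diffusers` as follows. This is a hedged sketch, not the card's own instructions: the base checkpoint is an assumption (the card does not state which model HCP-Diffusion trained against), and a CUDA GPU is assumed.

```python
def generate(prompt: str, step_dir: str = "1500"):
    """Load one step's embedding + LoRA and run a generation.

    Assumes `step_dir` contains alice_nikke.pt and alice_nikke.safetensors
    as described above; requires `diffusers`, `torch`, and a GPU.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed base model
        torch_dtype=torch.float16,
    ).to("cuda")
    # The .pt file is loaded as a textual-inversion embedding bound to the trigger word...
    pipe.load_textual_inversion(f"{step_dir}/alice_nikke.pt", token="alice_nikke")
    # ...and the .safetensors file is loaded as the LoRA weights.
    pipe.load_lora_weights(step_dir, weight_name="alice_nikke.safetensors")
    return pipe(prompt).images[0]
```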
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:----------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 | [<NSFW, click to see>](1500/previews/pattern_1.png) |  | [<NSFW, click to see>](1500/previews/pattern_3.png) | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/free.png) | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/alice_nikke.zip) |
| 1400 | [<NSFW, click to see>](1400/previews/pattern_1.png) |  | [<NSFW, click to see>](1400/previews/pattern_3.png) | [<NSFW, click to see>](1400/previews/bikini.png) | [<NSFW, click to see>](1400/previews/free.png) | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/alice_nikke.zip) |
| 1300 | [<NSFW, click to see>](1300/previews/pattern_1.png) |  | [<NSFW, click to see>](1300/previews/pattern_3.png) | [<NSFW, click to see>](1300/previews/bikini.png) | [<NSFW, click to see>](1300/previews/free.png) | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/alice_nikke.zip) |
| 1200 | [<NSFW, click to see>](1200/previews/pattern_1.png) |  | [<NSFW, click to see>](1200/previews/pattern_3.png) | [<NSFW, click to see>](1200/previews/bikini.png) | [<NSFW, click to see>](1200/previews/free.png) | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/alice_nikke.zip) |
| 1100 | [<NSFW, click to see>](1100/previews/pattern_1.png) |  | [<NSFW, click to see>](1100/previews/pattern_3.png) | [<NSFW, click to see>](1100/previews/bikini.png) | [<NSFW, click to see>](1100/previews/free.png) | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/alice_nikke.zip) |
| 1000 | [<NSFW, click to see>](1000/previews/pattern_1.png) |  | [<NSFW, click to see>](1000/previews/pattern_3.png) | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/free.png) | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/alice_nikke.zip) |
| 900 | [<NSFW, click to see>](900/previews/pattern_1.png) |  | [<NSFW, click to see>](900/previews/pattern_3.png) | [<NSFW, click to see>](900/previews/bikini.png) | [<NSFW, click to see>](900/previews/free.png) | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/alice_nikke.zip) |
| 800 | [<NSFW, click to see>](800/previews/pattern_1.png) |  | [<NSFW, click to see>](800/previews/pattern_3.png) | [<NSFW, click to see>](800/previews/bikini.png) | [<NSFW, click to see>](800/previews/free.png) | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/alice_nikke.zip) |
| 700 | [<NSFW, click to see>](700/previews/pattern_1.png) |  | [<NSFW, click to see>](700/previews/pattern_3.png) | [<NSFW, click to see>](700/previews/bikini.png) | [<NSFW, click to see>](700/previews/free.png) | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/alice_nikke.zip) |
| 600 | [<NSFW, click to see>](600/previews/pattern_1.png) |  | [<NSFW, click to see>](600/previews/pattern_3.png) | [<NSFW, click to see>](600/previews/bikini.png) | [<NSFW, click to see>](600/previews/free.png) | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/alice_nikke.zip) |
| 500 | [<NSFW, click to see>](500/previews/pattern_1.png) |  | [<NSFW, click to see>](500/previews/pattern_3.png) | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/free.png) | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/alice_nikke.zip) |
| 400 | [<NSFW, click to see>](400/previews/pattern_1.png) |  | [<NSFW, click to see>](400/previews/pattern_3.png) | [<NSFW, click to see>](400/previews/bikini.png) | [<NSFW, click to see>](400/previews/free.png) | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/alice_nikke.zip) |
| 300 | [<NSFW, click to see>](300/previews/pattern_1.png) |  | [<NSFW, click to see>](300/previews/pattern_3.png) | [<NSFW, click to see>](300/previews/bikini.png) | [<NSFW, click to see>](300/previews/free.png) | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/alice_nikke.zip) |
| 200 | [<NSFW, click to see>](200/previews/pattern_1.png) |  | [<NSFW, click to see>](200/previews/pattern_3.png) | [<NSFW, click to see>](200/previews/bikini.png) | [<NSFW, click to see>](200/previews/free.png) | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/alice_nikke.zip) |
| 100 | [<NSFW, click to see>](100/previews/pattern_1.png) |  | [<NSFW, click to see>](100/previews/pattern_3.png) | [<NSFW, click to see>](100/previews/bikini.png) | [<NSFW, click to see>](100/previews/free.png) | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/alice_nikke.zip) |
|
Eitanli/distilbert-qa-checkpoint-v4 | Eitanli | 2023-08-05T17:20:44Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-05T17:06:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-qa-checkpoint-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-qa-checkpoint-v4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
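As a hedged sketch, the hyperparameters above correspond to a `transformers.TrainingArguments` along these lines (the `output_dir` name is an assumption, and the model/dataset wiring is omitted):

```python
def make_training_args(output_dir: str = "distilbert-qa-checkpoint-v4"):
    """Build TrainingArguments mirroring the hyperparameters listed above."""
    from transformers import TrainingArguments

    return TrainingArguments(
        output_dir=output_dir,           # assumed name, not taken from the card
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        seed=42,
        lr_scheduler_type="linear",      # the Adam betas/epsilon above are the defaults
        num_train_epochs=20,
    )
```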
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0541 | 1.0 | 1083 | 0.9490 |
| 0.0494 | 2.0 | 2166 | 0.9200 |
| 0.0913 | 3.0 | 3249 | 0.6719 |
| 0.0935 | 4.0 | 4332 | 0.6882 |
| 0.0768 | 5.0 | 5415 | 0.6854 |
| 0.0732 | 6.0 | 6498 | 0.7032 |
| 0.0768 | 7.0 | 7581 | 0.6902 |
| 0.0755 | 8.0 | 8664 | 0.8092 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
oljike/all-kzkhs-lora | oljike | 2023-08-05T16:53:16Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-08-05T10:02:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - oljike/all-kzkhs-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the ../../../data/people/all dataset. Example images are shown below.

|
nokotin/pyramids | nokotin | 2023-08-05T16:46:54Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-08-05T16:46:46Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nokotin/pyramids
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
VicBeltran/taxi-V3-QlearningModel | VicBeltran | 2023-08-05T16:46:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T16:46:50Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-V3-QlearningModel
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="VicBeltran/taxi-V3-QlearningModel", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
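Beyond loading the model, a minimal greedy evaluation loop might look like the following sketch. It assumes the pickled dict exposes the Q-table under `model["qtable"]` (as in the Deep RL course helper, an assumption not stated by this card) and uses the older `gym` reset/step API:

```python
def evaluate_greedy(model, episodes=5):
    """Roll out the greedy policy from a loaded Q-learning model dict."""
    import gym
    import numpy as np

    env = gym.make(model["env_id"])
    returns = []
    for _ in range(episodes):
        state = env.reset()          # older gym API: reset() returns the state
        done, total = False, 0.0
        while not done:
            # Pick the highest-valued action for the current state.
            action = int(np.argmax(model["qtable"][state]))
            state, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return returns
```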
|
VicBeltran/q-FrozenLake-v1-4x4-noSlippery | VicBeltran | 2023-08-05T16:41:33Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T16:41:30Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks.
model = load_from_hub(repo_id="VicBeltran/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
w11wo/sundanese-bert-base-emotion-classifier | w11wo | 2023-08-05T16:06:54Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"sundanese-bert-base-emotion-classifier",
"su",
"arxiv:1810.04805",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language: su
tags:
- sundanese-bert-base-emotion-classifier
license: mit
widget:
- text: "Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah"
---
## Sundanese BERT Base Emotion Classifier
Sundanese BERT Base Emotion Classifier is an emotion-text-classification model based on the [BERT](https://arxiv.org/abs/1810.04805) model. The model was originally the pre-trained [Sundanese BERT Base Uncased](https://hf.co/luche/bert-base-sundanese-uncased) model trained by [`@luche`](https://hf.co/luche), which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 96.82% and F1-macro of 96.75%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | --------- | ------------------------------- |
| `sundanese-bert-base-emotion-classifier` | 110M | BERT Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.759800 | 0.263913 | 0.924603 | 0.925042 | 0.928426 | 0.926130 |
| 2 | 0.213100 | 0.456022 | 0.908730 | 0.906732 | 0.924141 | 0.907846 |
| 3 | 0.091900 | 0.204323 | 0.956349 | 0.955896 | 0.956226 | 0.956248 |
| 4 | 0.043800 | 0.219143 | 0.956349 | 0.955705 | 0.955848 | 0.956392 |
| 5 | 0.013700 | 0.247289 | 0.960317 | 0.959734 | 0.959477 | 0.960782 |
| 6 | 0.004800 | 0.286636 | 0.956349 | 0.955540 | 0.956519 | 0.956615 |
| 7 | 0.000200 | 0.243408 | 0.960317 | 0.959085 | 0.959145 | 0.959310 |
| 8 | 0.001500 | 0.232138 | 0.960317 | 0.959451 | 0.959427 | 0.959997 |
| 9 | 0.000100 | 0.215523 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
| 10 | 0.000100 | 0.216533 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/sundanese-bert-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah")
```
## Disclaimer
Do consider the biases from both the pre-trained BERT model and the Sundanese Twitter dataset, which may carry over into this model's results.
## Author
Sundanese BERT Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bib
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
nokotin/SnowballTarget | nokotin | 2023-08-05T16:06:23Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-08-05T16:06:16Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nokotin/SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
anniedong/projectile-flan-t5-v1 | anniedong | 2023-08-05T15:54:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T15:48:05Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
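For reference, a sketch of the same settings as a `transformers.BitsAndBytesConfig` (the `bnb_4bit_*` fields above are the library defaults, so only the 8-bit flags are set explicitly):

```python
from transformers import BitsAndBytesConfig

# Mirrors the settings listed above: plain 8-bit loading with the default
# int8 outlier threshold and no fp32 CPU offload.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```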
### Framework versions
- PEFT 0.5.0.dev0
|
arindamatcalgm/w266_model2_BERT_LSTM_1 | arindamatcalgm | 2023-08-05T15:48:53Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-02T04:44:54Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: w266_model2_BERT_LSTM_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w266_model2_BERT_LSTM_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6673
- Accuracy: 0.586
- F1: 0.5941
- Precision: 0.6306
- Recall: 0.586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 125  | 2.7886          | 0.563    | 0.5738 | 0.6070    | 0.563  |
| No log        | 2.0   | 250  | 3.2762          | 0.567    | 0.5732 | 0.6125    | 0.567  |
| No log        | 3.0   | 375  | 3.1370          | 0.570    | 0.5800 | 0.6123    | 0.570  |
| 0.0465        | 4.0   | 500  | 3.3590          | 0.569    | 0.5796 | 0.6093    | 0.569  |
| 0.0465        | 5.0   | 625  | 3.4285          | 0.570    | 0.5805 | 0.6190    | 0.570  |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
xuqinyang/baichuan-13b-chat-ggml-int4 | xuqinyang | 2023-08-05T15:47:28Z | 0 | 6 | null | [
"text-generation",
"doi:10.57967/hf/0963",
"region:us"
] | text-generation | 2023-07-12T04:25:34Z | ---
pipeline_tag: text-generation
---
For detailed usage, see: https://github.com/ouwei2013/baichuan13b.cpp |
enryu43/anifusion_augmenter | enryu43 | 2023-08-05T15:33:18Z | 209 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-10-11T19:43:12Z | Autoregressive prompt augmenter for https://medium.com/@enryu9000/anifusion-diffusion-models-for-anime-pictures-138cf1af2cbe.
|
hopkins/eng-deu-trial6 | hopkins | 2023-08-05T15:32:57Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-08-05T15:18:31Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-trial6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-trial6
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6328
- Bleu: 21.3888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tommilyjones/bert-base-uncased-finetuned-hateful-meme | tommilyjones | 2023-08-05T15:24:08Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T15:18:02Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-hateful-meme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-hateful-meme
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0538
- Accuracy: 0.544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5795 | 1.0 | 532 | 0.7869 | 0.564 |
| 0.5101 | 2.0 | 1064 | 0.8646 | 0.56 |
| 0.4455 | 3.0 | 1596 | 0.9011 | 0.538 |
| 0.3926 | 4.0 | 2128 | 1.1856 | 0.542 |
| 0.3387 | 5.0 | 2660 | 1.1351 | 0.552 |
| 0.3056 | 6.0 | 3192 | 1.3704 | 0.55 |
| 0.2942 | 7.0 | 3724 | 1.7288 | 0.538 |
| 0.2665 | 8.0 | 4256 | 1.7215 | 0.544 |
| 0.2498 | 9.0 | 4788 | 1.8634 | 0.542 |
| 0.2357 | 10.0 | 5320 | 2.0538 | 0.544 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
hannnnni/piggy | hannnnni | 2023-08-05T15:18:03Z | 0 | 3 | null | [
"region:us"
] | null | 2023-07-14T11:50:03Z | # 🐖-rvc-v2-model
This originally used a sovits4.1 pretrained model.
It has since been re-trained as an RVC v2 model, which greatly reduces the robotic, electronic-sounding artifacts.
https://colab.research.google.com/drive/1r4IRL0UA7JEoZ0ZK8PKfMyTIBHKpyhcw
Open the Colab notebook and run the first cell.

Click the public URL.

Go to the Download Model page and paste the model URL:
https://huggingface.co/hannnnni/piggy/resolve/main/tone-voice.zip
or
https://huggingface.co/hannnnni/piggy/resolve/main/dong-voice.zip
dong-voice.zip was only trained for 150 epochs; I didn't feel like training it further.
Go to the Inference page and upload the audio you want to convert.
A single audio clip of about 30 seconds is recommended.

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/zSLZrHuzxj8rrM0ICqOd1.wav"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/7W7pVBCAXQ842990u4ByU.wav"></audio>
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/64a7d6cf76d0a6cbbc3fff36/sNxy1oJ2_gLIzsH16Bci1.wav"></audio>
Vocal remover (separates the instrumental and vocal tracks):
https://ultimatevocalremover.com/ |
arhamk/ppo-SnowballTarget | arhamk | 2023-08-05T15:17:29Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-08-05T15:17:23Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: arhamk/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
mrkushrz/Llama2_PA_FRA-UAS-FAQ-v2 | mrkushrz | 2023-08-05T15:11:08Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:abhishek/llama-2-7b-hf-small-shards",
"base_model:finetune:abhishek/llama-2-7b-hf-small-shards",
"region:us"
] | null | 2023-08-04T10:19:58Z | ---
base_model: abhishek/llama-2-7b-hf-small-shards
tags:
- generated_from_trainer
model-index:
- name: Llama2_PA_FRA-UAS-FAQ-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2_PA_FRA-UAS-FAQ-v2
This model is a fine-tuned version of [abhishek/llama-2-7b-hf-small-shards](https://huggingface.co/abhishek/llama-2-7b-hf-small-shards) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 93
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
DavidGetter1/falcon_horror_small | DavidGetter1 | 2023-08-05T15:01:28Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T15:00:50Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
LinkSoul/LLaSM-Baichuan | LinkSoul | 2023-08-05T14:52:51Z | 28 | 9 | transformers | [
"transformers",
"pytorch",
"llaaa",
"text-generation",
"zh",
"en",
"dataset:LinkSoul/LLaSM-Audio-Instructions",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-26T04:12:53Z | ---
license: openrail
datasets:
- LinkSoul/LLaSM-Audio-Instructions
language:
- zh
- en
---
# LLaSM: Large Language and Speech Model
An open-source, commercially usable **Chinese-English bilingual speech-language assistant, LLaSM, together with the Chinese-English speech SFT dataset LLaSM-Audio-Instructions**: the first open-source, commercially usable dialogue model supporting Chinese-English speech-text multimodal conversation.
<!--
<p align="center">
<img src="meta/llasm_preview.jpg" width="40%">
</p>
-->

## Demo

## Try It Online
> Talk is cheap. Show you the Demo.
- [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/LLaSM)
## Downloads
- Models:
- [LLaSM-Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/LLaSM-Cllama2)
- [LLaSM-Baichuan-7B](https://huggingface.co/LinkSoul/LLaSM-Baichuan)
- Baidu Netdisk downloads:
- [LLaSM-Chinese-Llama-2-7B](https://pan.baidu.com/s/1PaipNDfqV7f3W1-tl5rwzA?pwd=2549)
- [LLaSM-Baichuan-7B](https://pan.baidu.com/s/1QZrXA8IJXclN77T4jM7tEw?pwd=y2p7)
- Language models:
- [Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b)
- [Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
- Dataset: [LLaSM-Audio-Instructions](https://huggingface.co/datasets/LinkSoul/LLaSM-Audio-Instructions)
## Environment Setup
```shell
# clone the repository
git clone https://github.com/LinkSoul-AI/LLaSM
cd LLaSM
# install package
conda create -n llasm python=3.10 -y
conda activate llasm
pip install --upgrade pip
pip install -e .
```
## Quick Test
```shell
export LLASM_DEVICE="cuda:0"
python infer.py \
--input_audio_file PATH/TO/YOUR/AUDIO \
--llasm_model PATH/TO/LLaSM/MODEL \
--llasm_audio_tower PATH/TO/WHISPER/MODEL \
    --llm_type "Chinese_llama2"  # or "baichuan"
```
## TODO
- How to train
- int4 quantization
- Docker deployment
## Related Projects
- [Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b)
- [Whisper](https://github.com/openai/whisper)
- [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
## License
[Apache-2.0 license](https://github.com/LinkSoul-AI/LLaSM/blob/main/LICENSE)
## WeChat Group
<!--
<img src="meta/QRcode.jpg" alt="微信交流群" width="300"/>
-->
Welcome to join the [WeChat group](meta/QRcode.jpg) |
LinkSoul/LLaSM-Cllama2 | LinkSoul | 2023-08-05T14:52:34Z | 27 | 48 | transformers | [
"transformers",
"pytorch",
"llaaa",
"text-generation",
"zh",
"en",
"dataset:LinkSoul/LLaSM-Audio-Instructions",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-30T02:39:03Z | ---
license: openrail
datasets:
- LinkSoul/LLaSM-Audio-Instructions
language:
- zh
- en
---
# LLaSM: Large Language and Speech Model
Open-source and commercially usable: **LLaSM, a bilingual Chinese/English speech-language assistant, together with LLaSM-Audio-Instructions, a bilingual Chinese/English speech SFT dataset**. LLaSM is the first open-source, commercially usable conversational model that supports Chinese and English speech-text multimodal dialogue.
<!--
<div align="center">
<img src="https://huggingface.co/LinkSoul/LLaSM-Cllama2/blob/main/meta/preview.jpg" width="40%">
</div>
-->

## Demo

## Try It Online
> Talk is cheap, Show you the Demo.
- [Demo / HuggingFace Spaces](https://huggingface.co/spaces/LinkSoul/LLaSM)
## Downloads
- Models:
- [LLaSM-Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/LLaSM-Cllama2)
- [LLaSM-Baichuan-7B](https://huggingface.co/LinkSoul/LLaSM-Baichuan)
- Baidu Netdisk downloads:
- [LLaSM-Chinese-Llama-2-7B](https://pan.baidu.com/s/1PaipNDfqV7f3W1-tl5rwzA?pwd=2549)
- [LLaSM-Baichuan-7B](https://pan.baidu.com/s/1QZrXA8IJXclN77T4jM7tEw?pwd=y2p7)
- Language models:
- [Chinese-Llama-2-7b](https://github.com/LinkSoul-AI/Chinese-Llama-2-7b)
- [Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
- Dataset: [LLaSM-Audio-Instructions](https://huggingface.co/datasets/LinkSoul/LLaSM-Audio-Instructions)
## Installation
```shell
# clone the repository
git clone https://github.com/LinkSoul-AI/LLaSM
cd LLaSM
# install package
conda create -n llasm python=3.10 -y
conda activate llasm
pip install --upgrade pip
pip install -e .
```
## Quick Test
```shell
export LLASM_DEVICE="cuda:0"
python infer.py \
--input_audio_file PATH/TO/YOUR/AUDIO \
--llasm_model PATH/TO/LLaSM/MODEL \
--llasm_audio_tower PATH/TO/WHISPER/MODEL \
--llm_type "Chinese_llama2"  # or "baichuan"
```
## TODO
- Training instructions
- int4 quantization
- Docker deployment
## Related Projects
- [Chinese-Llama-2-7B](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b)
- [Whisper](https://github.com/openai/whisper)
- [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
## License
[Apache-2.0 license](https://github.com/LinkSoul-AI/LLaSM/blob/main/LICENSE)
## WeChat Group
<!--
<img src="meta/QRcode.jpg" alt="微信交流群" width="300"/>
-->
You are welcome to join the [WeChat group](meta/QRcode.jpg) |
capeie/capeie-llama-openorca-lora | capeie | 2023-08-05T14:46:15Z | 5 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T14:46:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
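The settings listed above map one-to-one onto `transformers`' `BitsAndBytesConfig`. As a minimal sketch (reconstructed from the list, not taken from this repo's training code), the same 8-bit config could be rebuilt like this:

```python
from transformers import BitsAndBytesConfig

# Reconstruct the 8-bit quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```

This object is then passed as `quantization_config=` to `from_pretrained` when loading the base model for training.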
### Framework versions
- PEFT 0.5.0.dev0
|
mirantha/SalusPolicyLLM | mirantha | 2023-08-05T14:15:23Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T14:14:01Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Lukee4/test-2019 | Lukee4 | 2023-08-05T14:13:49Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T14:13:47Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jointitor/model-3 | jointitor | 2023-08-05T13:57:17Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-05T13:52:43Z | |
jointitor/model-2 | jointitor | 2023-08-05T13:47:04Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-05T13:37:26Z | |
ghostintheai/hassanBlend_1512_bakedvae_ft-mse-840k_ema_pruned | ghostintheai | 2023-08-05T13:39:55Z | 2 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T10:53:43Z | ---
license: creativeml-openrail-m
library_name: diffusers
---
This is HassansBlend 1.5.1.2 with the baked-in vae-ft-mse-840000-ema-pruned.ckpt.
I created this to use in VisionCrafter, since I didn't find an option to add a VAE file in the GUI.
Call me an amateur, I've only been doing this stuff for 3 days :D
Enjoy! And thanks to Hassan. |
javadaslanov/finetuned-new | javadaslanov | 2023-08-05T13:05:53Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-05T05:58:07Z | ---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: finetuned-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-new
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6670
- Rouge1: 23.8339
- Rouge2: 9.629
- Rougel: 20.6248
- Rougelsum: 21.936
- Gen Len: 18.9886
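The Rouge1 number above is unigram-overlap F1 (reported on a 0-100 scale). A minimal pure-Python sketch of the idea (not the card's actual scorer, which is the `rouge` metric from the evaluation stack):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between prediction and reference."""
    pred, ref = Counter(prediction.split()), Counter(reference.split())
    overlap = sum((pred & ref).values())  # matched unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat", "the cat sat down"), 4))  # 0.8571
```

The real scorer additionally applies stemming and handles ROUGE-2/ROUGE-L, but the precision/recall/F1 structure is the same.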
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 50 | 1.8104 | 24.234 | 13.0619 | 21.6635 | 22.7407 | 17.8295 |
| No log | 2.0 | 100 | 1.7152 | 23.8385 | 10.4031 | 20.7556 | 21.8852 | 18.9545 |
| No log | 3.0 | 150 | 1.6795 | 23.6911 | 9.6556 | 20.5848 | 21.8707 | 18.9886 |
| No log | 4.0 | 200 | 1.6670 | 23.8339 | 9.629 | 20.6248 | 21.936 | 18.9886 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
zz0906/llama2-qlora-from_colab_test | zz0906 | 2023-08-05T13:05:37Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-05T13:05:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
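This is the QLoRA-style 4-bit setup. As a sketch (reconstructed from the list above, not from this repo's training script), the equivalent `BitsAndBytesConfig` would be:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruct the 4-bit NF4 quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```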
### Framework versions
- PEFT 0.5.0.dev0
|
soymia/meister-mindmap-model-pytorch | soymia | 2023-08-05T13:03:58Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T12:45:37Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: meister-mindmap-model-pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meister-mindmap-model-pytorch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Accuracy: 0.9971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7075 | 1.0 | 678 | 0.0548 | 0.9878 |
| 0.0613 | 2.0 | 1356 | 0.0163 | 0.9971 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.3
|
abyrush/cepio48 | abyrush | 2023-08-05T12:54:30Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-05T12:54:30Z | ---
license: creativeml-openrail-m
---
|
taohoang/whisper-tiny-en-US | taohoang | 2023-08-05T12:45:19Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-05T12:26:21Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[450:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.3435655253837072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Wer Ortho: 0.3430
- Wer: 0.3436
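WER (word error rate) is word-level edit distance divided by reference length; "Wer Ortho" is the same metric computed before text normalization. A minimal pure-Python sketch of the computation (the card's actual numbers come from the standard `wer` metric, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(wer("turn off the lights", "turn of the light"))  # 0.5
```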
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 10
- training_steps: 225
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 3.2798 | 0.25 | 14 | 0.9783 | 0.7218 | 0.6889 |
| 0.6283 | 0.5 | 28 | 0.5667 | 0.4479 | 0.4427 |
| 0.5574 | 0.75 | 42 | 0.5307 | 0.4812 | 0.4858 |
| 0.501 | 1.0 | 56 | 0.5130 | 0.3800 | 0.3813 |
| 0.2296 | 1.25 | 70 | 0.5057 | 0.3479 | 0.3436 |
| 0.2296 | 1.5 | 84 | 0.5515 | 0.3572 | 0.3512 |
| 0.2207 | 1.75 | 98 | 0.5356 | 0.3578 | 0.3530 |
| 0.1928 | 2.0 | 112 | 0.5288 | 0.3226 | 0.3200 |
| 0.0795 | 2.25 | 126 | 0.5532 | 0.3257 | 0.3259 |
| 0.0651 | 2.5 | 140 | 0.5833 | 0.3504 | 0.3512 |
| 0.0719 | 2.75 | 154 | 0.5931 | 0.3467 | 0.3501 |
| 0.0722 | 3.0 | 168 | 0.5994 | 0.3498 | 0.3477 |
| 0.0231 | 3.25 | 182 | 0.6030 | 0.3270 | 0.3264 |
| 0.0433 | 3.5 | 196 | 0.6059 | 0.3214 | 0.3200 |
| 0.0663 | 3.75 | 210 | 0.6262 | 0.3646 | 0.3648 |
| 0.0396 | 4.0 | 224 | 0.6286 | 0.3430 | 0.3436 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
helamri/q-FrozenLake-v1-4x4-noSlippery | helamri | 2023-08-05T12:23:35Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T12:23:31Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # load_from_hub is the helper from the Hugging Face Deep RL course

model = load_from_hub(repo_id="helamri/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
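Once loaded, the pickled dict holds the learned Q-table, and acting is just a greedy argmax per state. A sketch of the exploitation step (the `"qtable"` key name in the saved dict is an assumption about the course's pickle format):

```python
import numpy as np

def greedy_action(qtable, state):
    # exploit: pick the action with the highest Q-value in this state
    return int(np.argmax(qtable[state]))

# toy 2-state, 3-action Q-table just to illustrate the call
toy_qtable = np.array([[0.1, 0.9, 0.0],
                       [0.5, 0.2, 0.3]])
print(greedy_action(toy_qtable, 0))  # 1
```

In an evaluation loop you would call `greedy_action(model["qtable"], state)` on each step.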
|
YanJiangJerry/bertweet-large_epoch6_batch4_lr2e-05_w0.01 | YanJiangJerry | 2023-08-05T12:16:14Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-large",
"base_model:finetune:vinai/bertweet-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T09:57:02Z | ---
base_model: vinai/bertweet-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-large_epoch6_batch4_lr2e-05_w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-large_epoch6_batch4_lr2e-05_w0.01
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7423
- Accuracy: 0.6274
- F1: 0.0
- Precision: 0.0
- Recall: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:|
| 0.6851 | 1.0 | 788 | 0.6628 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.678 | 2.0 | 1576 | 0.6763 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6778 | 3.0 | 2364 | 0.6613 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6732 | 4.0 | 3152 | 0.7288 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6631 | 5.0 | 3940 | 0.6935 | 0.6274 | 0.0 | 0.0 | 0.0 |
| 0.6456 | 6.0 | 4728 | 0.7423 | 0.6274 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Aspik101/30B-Lazarus-instruct-PL-lora_GGML | Aspik101 | 2023-08-05T12:12:18Z | 0 | 0 | null | [
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] | text-generation | 2023-08-05T11:17:09Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
jointitor/model-c | jointitor | 2023-08-05T12:01:17Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-05T12:01:17Z | |
fromhell01/q-FrozenLake-v1-4x4-noSlippery | fromhell01 | 2023-08-05T11:34:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T11:34:13Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # load_from_hub is the helper from the Hugging Face Deep RL course

model = load_from_hub(repo_id="fromhell01/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jointitor/model-b | jointitor | 2023-08-05T11:33:15Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-05T11:31:22Z | |
SigmaJDN/animals | SigmaJDN | 2023-08-05T11:30:00Z | 193 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-08-05T11:29:53Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: animals
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9821428656578064
---
# animals
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cat

#### cow

#### dog

#### horse

#### lion
 |
jointitor/model-a | jointitor | 2023-08-05T11:20:29Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-05T11:20:29Z | |
psxjp5/mt5-small_large_lr | psxjp5 | 2023-08-05T11:08:50Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-05T07:55:25Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: mt5-small_large_lr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small_large_lr
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9688
- Rouge1: 38.8633
- Rouge2: 33.0802
- Rougel: 37.6956
- Rougelsum: 37.7116
- Bleu: 26.6301
- Gen Len: 11.5566
- Meteor: 0.3519
- No ans accuracy: 22.99
- Av cosine sim: 0.6861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 9
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | Meteor | No ans accuracy | Av cosine sim |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:------:|:---------------:|:-------------:|
| 5.4434 | 1.0 | 175 | 2.1918 | 1.8449 | 1.2024 | 1.7039 | 1.7116 | 0.0 | 2.7672 | 0.0145 | 28.9700 | 0.1363 |
| 1.8436 | 1.99 | 350 | 1.1852 | 33.6062 | 26.8725 | 32.2258 | 32.241 | 20.3395 | 12.2528 | 0.2957 | 17.3800 | 0.636 |
| 1.2276 | 2.99 | 525 | 1.0630 | 33.186 | 27.4949 | 32.0715 | 32.0522 | 20.3232 | 11.0301 | 0.2957 | 21.18 | 0.6109 |
| 0.9589 | 3.98 | 700 | 1.0083 | 40.265 | 33.6652 | 38.9503 | 38.9661 | 28.0884 | 12.8545 | 0.3623 | 17.54 | 0.7157 |
| 0.7931 | 4.98 | 875 | 0.9682 | 37.9437 | 31.7611 | 36.7618 | 36.7671 | 25.7738 | 12.0286 | 0.3424 | 20.66 | 0.6825 |
| 0.6686 | 5.97 | 1050 | 0.9601 | 37.5742 | 31.9098 | 36.4225 | 36.4381 | 24.9584 | 11.4169 | 0.3398 | 22.56 | 0.6713 |
| 0.5686 | 6.97 | 1225 | 0.9620 | 43.1436 | 36.6363 | 41.7279 | 41.7571 | 32.4301 | 13.6142 | 0.3893 | 16.9400 | 0.757 |
| 0.4939 | 7.96 | 1400 | 0.9688 | 38.8633 | 33.0802 | 37.6956 | 37.7116 | 26.6301 | 11.5566 | 0.3519 | 22.99 | 0.6861 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MattStammers/a2c-PandaReachDense-v2 | MattStammers | 2023-08-05T11:08:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T09:41:09Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -4.37 +/- 1.32
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the actual name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# NOTE: filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="MattStammers/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Aityz/aityz_chatbot | Aityz | 2023-08-05T10:34:38Z | 207 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-05T09:57:36Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: aityz_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aityz_chatbot
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 467 | 75.0160 |
| 93.9474 | 2.0 | 934 | 9.0902 |
| 21.3455 | 3.0 | 1401 | 7.8707 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
minjingzhu/bigbird-pegasus-large-pubmed-finetuned-legal-2 | minjingzhu | 2023-08-05T10:34:17Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bigbird_pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/bigbird-pegasus-large-pubmed",
"base_model:finetune:google/bigbird-pegasus-large-pubmed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-05T06:08:45Z | ---
license: apache-2.0
base_model: google/bigbird-pegasus-large-pubmed
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bigbird-pegasus-large-pubmed-finetuned-legal-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-pegasus-large-pubmed-finetuned-legal-2
This model is a fine-tuned version of [google/bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0932
- Rouge1: 35.0046
- Rouge2: 14.6481
- Rougel: 20.8387
- Rougelsum: 32.3484
- Gen Len: 245.06
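For orientation, the ROUGE-1 figure above is based on clipped unigram overlap between candidate and reference summaries. A minimal illustrative sketch follows; the scores in this card were produced by the full `rouge` metric (which also applies stemming and bootstrap aggregation), so this is not a drop-in replacement:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # min counts = clipped matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat"))  # 0.8
```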
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.3208 | 1.0 | 6176 | 3.0932 | 35.0046 | 14.6481 | 20.8387 | 32.3484 | 245.06 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
helamri/ppo-Huggy | helamri | 2023-08-05T10:29:33Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-08-05T10:29:28Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: helamri/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Gracoy/ingredients_compatibility_GPT2_S | Gracoy | 2023-08-05T09:55:45Z | 62 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-02T02:38:35Z | ---
tags:
- generated_from_keras_callback
model-index:
- name: ingredients_compatibility_GPT2_S
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ingredients_compatibility_GPT2_S
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9950
- Validation Loss: 1.0009
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.99, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9950 | 1.0009 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.3
- Tokenizers 0.13.3
|
YanJiangJerry/bertweet-large_epoch3_batch4_lr2e-05_w0.01 | YanJiangJerry | 2023-08-05T09:33:52Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-large",
"base_model:finetune:vinai/bertweet-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T08:56:35Z | ---
base_model: vinai/bertweet-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-large_epoch3_batch4_lr2e-05_w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-large_epoch3_batch4_lr2e-05_w0.01
This model is a fine-tuned version of [vinai/bertweet-large](https://huggingface.co/vinai/bertweet-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5167
- Accuracy: 0.9066
- F1: 0.8768
- Precision: 0.8617
- Recall: 0.8925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6423 | 1.0 | 788 | 0.4273 | 0.8966 | 0.8597 | 0.8689 | 0.8507 |
| 0.4072 | 2.0 | 1576 | 0.5435 | 0.8910 | 0.8600 | 0.8247 | 0.8985 |
| 0.2823 | 3.0 | 2364 | 0.5167 | 0.9066 | 0.8768 | 0.8617 | 0.8925 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
MattStammers/a2c-AntBullet | MattStammers | 2023-08-05T09:14:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-05T09:12:54Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1411.21 +/- 388.99
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list for the actual `.zip`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption -- adjust to the actual .zip in this repo
path = load_from_hub("MattStammers/a2c-AntBullet", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(path)
```
|
gaodrew/git-base-pokemon | gaodrew | 2023-08-05T08:51:53Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-08-05T08:06:54Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0348
- Wer Score: 2.7147
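As a refresher on the Wer Score metric above: word error rate is the word-level edit distance divided by the reference length. A minimal illustrative sketch follows (the trainer itself most likely used the `evaluate`/`jiwer` implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat on"))  # one insertion over three words
```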
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.3601 | 4.17 | 50 | 4.5925 | 21.8560 |
| 2.4331 | 8.33 | 100 | 0.4978 | 15.2153 |
| 0.1504 | 12.5 | 150 | 0.0323 | 1.2062 |
| 0.0142 | 16.67 | 200 | 0.0288 | 3.0791 |
| 0.0039 | 20.83 | 250 | 0.0314 | 2.3619 |
| 0.0021 | 25.0 | 300 | 0.0327 | 2.6537 |
| 0.0016 | 29.17 | 350 | 0.0333 | 3.2049 |
| 0.0014 | 33.33 | 400 | 0.0344 | 2.9403 |
| 0.0012 | 37.5 | 450 | 0.0344 | 2.9624 |
| 0.0011 | 41.67 | 500 | 0.0345 | 2.8106 |
| 0.0011 | 45.83 | 550 | 0.0346 | 2.7393 |
| 0.0011 | 50.0 | 600 | 0.0348 | 2.7147 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
YanJiangJerry/bertweet-base_epoch3_batch4_lr2e-05_w0.01 | YanJiangJerry | 2023-08-05T08:50:19Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-05T08:43:07Z | ---
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bertweet-base_epoch3_batch4_lr2e-05_w0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base_epoch3_batch4_lr2e-05_w0.01
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5753
- Accuracy: 0.8687
- F1: 0.8275
- Precision: 0.8109
- Recall: 0.8448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5235 | 1.0 | 788 | 0.4170 | 0.8643 | 0.8076 | 0.8562 | 0.7642 |
| 0.3755 | 2.0 | 1576 | 0.5068 | 0.8699 | 0.8272 | 0.8187 | 0.8358 |
| 0.2978 | 3.0 | 2364 | 0.5753 | 0.8687 | 0.8275 | 0.8109 | 0.8448 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
ravenscroftj/CodeGen-2B-multi-ggml-quant | ravenscroftj | 2023-08-05T08:32:42Z | 0 | 2 | null | [
"ggml",
"quantized",
"text-generation",
"en",
"license:bsd-3-clause",
"region:us"
] | text-generation | 2023-04-23T11:18:46Z | ---
license: bsd-3-clause
language:
- en
pipeline_tag: text-generation
tags:
- ggml
- quantized
---
# Codegen 2B Multi GGML Quantized
This is Salesforce's CodeGen 2B multi model, ported to GGML and quantized so it can be executed via [turbopilot](https://github.com/ravenscroftj/turbopilot).
Please refer to the [turbopilot](https://github.com/ravenscroftj/turbopilot) project to learn more about this model.
**NB: this model is not directly compatible with llama.cpp. You will need to use [turbopilot](https://github.com/ravenscroftj/turbopilot) to run it.** |
KevinC/ppo-Huggy | KevinC | 2023-08-05T08:28:00Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-08-05T08:27:50Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: KevinC/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Tverous/sft-trl-claim-ppo3 | Tverous | 2023-08-05T08:27:30Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-04T14:17:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
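For reference, these settings map onto the keyword arguments of `transformers.BitsAndBytesConfig`. A sketch as a plain dict (field names copied verbatim from the list above; passing it to `BitsAndBytesConfig(**bnb_kwargs)` when reloading the base model is an assumption about the original setup):

```python
# Sketch only: the bitsandbytes settings above as a dict of keyword arguments.
# 8-bit loading is enabled; the 4-bit fields are present but inactive.
bnb_kwargs = {
    "load_in_8bit": True,
    "load_in_4bit": False,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "fp4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float32",
}
print(sorted(bnb_kwargs))
```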
### Framework versions
- PEFT 0.4.0.dev0
|