| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
YeungNLP/bloom-396m-zh
|
YeungNLP
| 2023-04-03T10:17:39Z | 150 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-27T15:22:39Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
YeungNLP/bloom-820m-zh
|
YeungNLP
| 2023-04-03T10:17:29Z | 144 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-28T09:03:20Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
YeungNLP/bloomz-1b4-zh
|
YeungNLP
| 2023-04-03T10:17:11Z | 145 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-03-28T09:26:37Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
YeungNLP/bloomz-2b6-zh
|
YeungNLP
| 2023-04-03T10:16:43Z | 148 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-03T03:28:22Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
YeungNLP/bloomz-6b4-zh
|
YeungNLP
| 2023-04-03T10:16:30Z | 18 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-03T05:09:10Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
YeungNLP/bloomz-6b4-mt-zh
|
YeungNLP
| 2023-04-03T10:16:20Z | 15 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-03T03:49:52Z |
Project page: [LLMPruner: a pruning tool for large language models](https://github.com/yangjianxin1/LLMPruner)
LLMPruner is a pruning tool for large language models. By pruning a model's redundant vocabulary it reduces the parameter count, lowers GPU memory usage, and speeds up training, while preserving the knowledge learned during pretraining.
This project prunes the vocabulary of Bloom, keeping the Chinese tokens and the common English tokens; the vocabulary is reduced from 250,880 to 46,145 entries, 18.39% of the original size. The pruned Bloom models are listed in the table below:
| Pruned model | Original model | Parameter ratio |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|--------|
| [YeungNLP/bloom-396m-zh](https://huggingface.co/YeungNLP/bloom-396m-zh) | [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 70.96% |
| [YeungNLP/bloom-820m-zh](https://huggingface.co/YeungNLP/bloom-820m-zh) | [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) | 77.13% |
| [YeungNLP/bloom-1b4-zh](https://huggingface.co/YeungNLP/bloom-1b4-zh) | [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) | 81.14% |
| [YeungNLP/bloom-2b6-zh](https://huggingface.co/YeungNLP/bloom-2b6-zh) | [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 86.48% |
| [YeungNLP/bloom-6b4-zh](https://huggingface.co/YeungNLP/bloom-6b4-zh) | [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) | 90.81% |
| [YeungNLP/bloomz-396m-zh](https://huggingface.co/YeungNLP/bloomz-396m-zh) | [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) | 70.96% |
| [YeungNLP/bloomz-820m-zh](https://huggingface.co/YeungNLP/bloomz-820m-zh) | [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) | 77.13% |
| [YeungNLP/bloomz-1b4-zh](https://huggingface.co/YeungNLP/bloomz-1b4-zh) | [bigscience/bloomz-1b7](https://huggingface.co/bigscience/bloomz-1b7) | 81.14% |
| [YeungNLP/bloomz-2b6-zh](https://huggingface.co/YeungNLP/bloomz-2b6-zh) | [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) | 86.48% |
| [YeungNLP/bloomz-6b4-zh](https://huggingface.co/YeungNLP/bloomz-6b4-zh) | [bigscience/bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 90.81% |
| [YeungNLP/bloomz-6b4-mt-zh](https://huggingface.co/YeungNLP/bloomz-6b4-mt-zh) | [bigscience/bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 90.81% |
Usage:
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('YeungNLP/bloom-1b4-zh')
model = BloomForCausalLM.from_pretrained('YeungNLP/bloom-1b4-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('长风破浪会有时', return_tensors='pt'))))
```
|
p1atdev/ANime
|
p1atdev
| 2023-04-03T10:13:08Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-04-03T09:57:19Z |
---
license: other
---
# ANime
Truly anime style model(s).
## LoRA
**Please use with [PVC v4](https://huggingface.co/p1atdev/pvc-v4)**, not vanilla WD.
- [anime-recoil](https://huggingface.co/p1atdev/ANime/blob/main/lora/anime-recoil.safetensors) LoRA
Trained on the first episode of Lycoris Recoil. Use `anime style` in the prompt to force the anime style.
If you want Chisato and Takina, use a weight of about 0.8 or higher; if you only want the anime style, use a weight of 0.4 to 0.8.

## License
These models are released under the Fair AI Public License 1.0-SD (https://freedevproject.org/faipl-1.0-sd/). If any derivative of this model is made, please share your changes accordingly. Special thanks to ronsor/undeleted (https://undeleted.ronsor.com/) for help with the license.
|
gelas/taxi
|
gelas
| 2023-04-03T10:00:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T10:00:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.64
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="gelas/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yuekai/wenet_cpu_runtime_benchmark
|
yuekai
| 2023-04-03T09:57:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-04-03T04:48:22Z |
### Wenet ASR cpu runtime benchmark
AliCloud instance: ecs.c7.16xlarge, 64 vCPU, 128 GiB, 2.7 GHz/3.5 GHz, 32 Gbps, Intel Xeon (Ice Lake) Platinum 8369B
Offline ASR
Num Concurrent Tasks: 256 (Best among 16, 32, 64, 128, 256, 512)
|Model | Aishell1 Test WER (%) | 10 Hours Decoding Time (secs) |
|---|---|-------------------------------------|
|Aishell1_small_offline_fp32_ctc_prefix | 4.62 |74.56 |
|Aishell1_small_offline_int8_ctc_prefix | 4.79 |59.08 |
|Aishell1_small_offline_fp32_ctc_wfst | 4.38 | 90.91|
|Aishell1_small_offline_int8_ctc_wfst | 4.50 | 69.57 |
|WenetSpeech_large_offline_fp32_ctc_prefix | 4.61 |181.77 |
|WenetSpeech_large_offline_int8_ctc_prefix | 4.71 |125.57|
Streaming ASR
Num Concurrent Tasks: 256 (Best among 16, 32, 64, 128, 256, 512)
|Model | Aishell1 Test WER (%) | 10 Hours Decoding Time (secs) |
|---|---|-------------------------------------|
|Aishell1_small_u2pp_fp32_ctc_prefix | | 102.21 |
|Aishell1_small_u2pp_fp32_ctc_wfst | | 106.67 |
|WenetSpeech_large_u2pp_fp32_ctc_prefix | | 276.32 |
|
EExe/poca-SoccerTwos
|
EExe
| 2023-04-03T09:55:10Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-03T09:53:15Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Write your model_id: EExe/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
kambehmw/rl_course_vizdoom_health_gathering_supreme
|
kambehmw
| 2023-04-03T09:39:15Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T09:38:59Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.39 +/- 3.51
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kambehmw/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
dvilasuero/alpaca-bad-instruction-detector
|
dvilasuero
| 2023-04-03T09:23:02Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:dvilasuero/autotrain-data-alpaca-bs-detector",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-03T07:44:18Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "INSTRUCTION:\nReview the given chart and find the outlier.\nINPUT:\nData Series A: 0, 5, 8, 10, 11, 10, 9\nOUTPUT:\nThe outlier of the given data series is 11, as it is numerically greater than the rest of the numbers in the series.\n"
datasets:
- dvilasuero/autotrain-data-alpaca-bs-detector
co2_eq_emissions:
emissions: 0.4102361717910936
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 46079114807
- CO2 Emissions (in grams): 0.4102
## Validation Metrics
- Loss: 0.305
- Accuracy: 0.891
- Macro F1: 0.887
- Micro F1: 0.891
- Weighted F1: 0.891
- Macro Precision: 0.890
- Micro Precision: 0.891
- Weighted Precision: 0.891
- Macro Recall: 0.885
- Micro Recall: 0.891
- Weighted Recall: 0.891
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/dvilasuero/autotrain-alpaca-bs-detector-46079114807
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dvilasuero/autotrain-alpaca-bs-detector-46079114807", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dvilasuero/autotrain-alpaca-bs-detector-46079114807", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
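The snippet above stops at the raw model outputs; a possible continuation (not part of the original card) that maps the logits to a predicted label could look like this:
```python
# Continuing from the snippet above: pick the highest-scoring class and look up its name.
predicted_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```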
|
ymmttks/danseibosyu
|
ymmttks
| 2023-04-03T09:15:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-04-01T11:37:40Z |
# danseibosyu
## Trigger Word
```
danseibosyu
```
## Sample
```prompt example
masterpiece, best quality, ultra-detailed, 1girl,danseibosyu<lora:danseibosyu:1>
```
<img src="https://huggingface.co/ymmttks/danseibosyu/resolve/main/samples/00151-2557695287.png" width="600">
<img src="https://huggingface.co/ymmttks/danseibosyu/resolve/main/samples/00020-3194616569.png" width="600">
## omake
<img src="https://huggingface.co/ymmttks/danseibosyu/resolve/main/samples/00015-3046191811.png" width="600">
<img src="https://huggingface.co/ymmttks/danseibosyu/resolve/main/samples/00006-1918596583.png" width="600">
|
sagu7/cartoondetection_sagnik
|
sagu7
| 2023-04-03T09:13:37Z | 236 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-06T08:08:42Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cartoondetection_sagnik
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9976562261581421
---
# cartoondetection_sagnik
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cartoon

#### person

|
Shawn286/git-base-pokemon
|
Shawn286
| 2023-04-03T08:59:50Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"git",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-04-03T08:13:42Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0345
- Wer Score: 2.4097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.3695 | 4.17 | 50 | 4.5700 | 21.4160 |
| 2.3984 | 8.33 | 100 | 0.4696 | 10.9249 |
| 0.1439 | 12.5 | 150 | 0.0305 | 1.1692 |
| 0.02 | 16.67 | 200 | 0.0263 | 1.5229 |
| 0.0084 | 20.83 | 250 | 0.0295 | 2.6539 |
| 0.003 | 25.0 | 300 | 0.0324 | 3.2125 |
| 0.0018 | 29.17 | 350 | 0.0329 | 2.6628 |
| 0.0014 | 33.33 | 400 | 0.0336 | 2.5407 |
| 0.0013 | 37.5 | 450 | 0.0338 | 2.4008 |
| 0.0011 | 41.67 | 500 | 0.0344 | 2.5115 |
| 0.0011 | 45.83 | 550 | 0.0344 | 2.3766 |
| 0.0011 | 50.0 | 600 | 0.0345 | 2.4097 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
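The card documents only the training run; a minimal captioning sketch (not part of the original card, following the usual GIT inference pattern in `transformers`) could look like this:
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Shawn286/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("Shawn286/git-base-pokemon")

image = Image.open("pokemon.png")  # hypothetical local image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```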
|
celtics1863/env-policy-cls-bert
|
celtics1863
| 2023-04-03T08:43:58Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-04-03T08:41:53Z |
---
license: apache-2.0
language:
- zh
---
The model classifies text into 15 environmental-policy categories (labels are kept in the original Chinese, as returned by the model):
['环境统计与总量控制',
'环评与许可证',
'环境监测管理',
'海洋环境管理',
'生态环境执法',
'科技与合作',
'辐射管理',
'水环境管理',
'固废及化学品管理',
'热线与应急管理',
'长三角一体化环境合作',
'自然生态',
'规划与计划',
'土壤环境管理',
'大气环境管理']
Top-1 accuracy: 0.936
Top-3 accuracy: 0.993
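A minimal usage sketch (not part of the original card, assuming the checkpoint loads as a standard sequence-classification model):
```python
from transformers import pipeline

# The returned label should be one of the 15 Chinese policy categories listed above.
classifier = pipeline("text-classification", model="celtics1863/env-policy-cls-bert")
print(classifier("加强海洋生态环境监测与执法"))
```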
|
PJHinAI/sentiment-analysis-using-steam-data
|
PJHinAI
| 2023-04-03T08:07:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-29T02:57:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: activelearning-sentiment-model-using-steam-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# activelearning-sentiment-model-using-steam-data
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2861
- Accuracy: 0.8470
- F1: 0.8467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
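No usage example is included in the card; a minimal sketch (not part of the original card, assuming the standard text-classification pipeline) might be:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="PJHinAI/sentiment-analysis-using-steam-data")
print(classifier("Great gameplay, but the servers are unstable."))  # the example review text is ours
```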
|
keonju/gpt-j-fin
|
keonju
| 2023-04-03T08:05:15Z | 0 | 0 | null |
[
"pytorch",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-03-23T18:04:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-j-fin
results: []
---
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("keonju/gpt-j-fin")
model = AutoModelForCausalLM.from_pretrained("keonju/gpt-j-fin")
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-j-fin
This model is a fine-tuned version of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
BlackKakapo/opus-mt-fi-ro
|
BlackKakapo
| 2023-04-03T07:59:33Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ro",
"fi",
"dataset:yhavinga/ccmatrix",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-31T20:17:02Z |
---
language:
- ro
- fi
tags:
- translation
- text2text-generation
license: apache-2.0
datasets:
- yhavinga/ccmatrix
pipeline_tag: translation
library_name: transformers
---
# Finnish-Romanian Translate

# Finetune
This model is a fine-tuned version of the Helsinki-NLP/opus-mt-fi-ro model, trained on 2 million records.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/opus-mt-fi-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/opus-mt-fi-ro")
```
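The card stops after loading the tokenizer and model; a minimal translation example (not part of the original card; the Finnish input sentence is ours) could continue as follows:
```python
# Translate a Finnish sentence into Romanian with the model loaded above.
text = "Hyvää huomenta, miten voit?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```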
|
celtics1863/env-topic-albert
|
celtics1863
| 2023-04-03T07:49:11Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-04-03T07:40:20Z |
---
license: apache-2.0
language:
- zh
---
Topic classification model built from all sub-topics under the "environment" topic on 某乎 (a Chinese Q&A site); after filtering, 69 classes remain.
Top-1 accuracy: 54.3.
Top-3 accuracy: 80.6.
It can be used as a preprocessing step for Chinese environmental text mining.
ALBERT model: small and fast at inference.
Labels (kept in the original Chinese, as returned by the model):
"生态环境","水污染", "野生动物保护", "太阳能", "环保经济", "污水处理", "绿色建筑", "水处理", "噪音污染", "温室效应", "净水设备",
"净水器", "自来水", "生活", "环境评估", "空气污染", "环境评价", "工业污染", "雾霾", "植树", "环保行业", "水处理工程", "沙漠治理",
"巴黎协定", "核能", "噪音", "环评工程师", "二氧化碳", "低碳", "自然环境", "沙尘暴", "环境工程", "秸秆焚烧", "PM 2.5", "太空垃圾",
"穹顶之下(纪录片)", "垃圾", "环境科学", "净水", "污水排放", "室内空气污染", "环境污染", "全球变暖", "邻居噪音", "土壤污染", "生物多样性",
"碳交易", "污染治理", "雾霾治理", "碳金融", "建筑节能", "风能及风力发电", "温室气体", "环境保护", "碳排放", "垃圾处理器", "气候变化", "化学污染",
"地球一小时", "环保组织", "物种多样性", "节能减排", "核污染", "环保督查", "垃圾处理", "垃圾分类", "重金属污染", "环境伦理学", "垃圾焚烧"
|
terzimert/bert-finetuned-ner-v4.008
|
terzimert
| 2023-04-03T07:42:38Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:caner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-03T07:20:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- caner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-v4.008
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: caner
type: caner
config: default
split: train[56%:57%]
args: default
metrics:
- name: Precision
type: precision
value: 0.8976470588235295
- name: Recall
type: recall
value: 0.8430939226519337
- name: F1
type: f1
value: 0.8695156695156695
- name: Accuracy
type: accuracy
value: 0.8992103075644223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-v4.008
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the caner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8089
- Precision: 0.8976
- Recall: 0.8431
- F1: 0.8695
- Accuracy: 0.8992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2406 | 1.0 | 3228 | 0.6527 | 0.8627 | 0.8265 | 0.8442 | 0.8838 |
| 0.1618 | 2.0 | 6456 | 0.7268 | 0.8988 | 0.8243 | 0.8599 | 0.8982 |
| 0.1087 | 3.0 | 9684 | 0.8089 | 0.8976 | 0.8431 | 0.8695 | 0.8992 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
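The card reports metrics only; a minimal inference sketch (not part of the original card; caner is an Arabic NER corpus, and the example sentence is ours) might look like:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="terzimert/bert-finetuned-ner-v4.008", aggregation_strategy="simple")
print(ner("ولد محمد بن إدريس الشافعي في غزة"))  # grouped entity spans with labels and scores
```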
|
arjun-test-gpt/Dog-gpt
|
arjun-test-gpt
| 2023-04-03T07:41:47Z | 0 | 0 |
tensorflowtts
|
[
"tensorflowtts",
"en",
"ml",
"ta",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:gsdf/EasyNegative",
"dataset:Nerfgun3/bad_prompt",
"dataset:tatsu-lab/alpaca",
"dataset:allenai/objaverse",
"license:openrail",
"region:us"
] | null | 2023-04-03T07:39:56Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
- gsdf/EasyNegative
- Nerfgun3/bad_prompt
- tatsu-lab/alpaca
- allenai/objaverse
language:
- en
- ml
- ta
metrics:
- accuracy
- character
library_name: tensorflowtts
---
|
songyizhao/q-taxi-v3
|
songyizhao
| 2023-04-03T07:35:04Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T07:35:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="songyizhao/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rarerambler/segformer-b0-scene-parse-150
|
rarerambler
| 2023-04-03T07:30:40Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-04-03T07:17:54Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Mihara-bot/dqn-SpaceInvadersNoFrameskip-v4
|
Mihara-bot
| 2023-04-03T07:23:02Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T07:22:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 374.00 +/- 214.89
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mihara-bot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Mihara-bot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mihara-bot
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
pregonas/rl_course_vizdoom_health_gathering_supreme
|
pregonas
| 2023-04-03T07:18:19Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T07:18:11Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.83 +/- 5.75
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pregonas/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.9.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
AryaParikh/autotrain-summ_arp_2-46098114797
|
AryaParikh
| 2023-04-03T07:14:00Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-summ_arp_2",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-04-03T07:08:28Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hinataaa/autotrain-data-summ_arp_2
co2_eq_emissions:
emissions: 2.584620959475704
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 46098114797
- CO2 Emissions (in grams): 2.5846
## Validation Metrics
- Loss: 0.914
- Rouge1: 55.361
- Rouge2: 27.454
- RougeL: 47.968
- RougeLsum: 47.978
- Gen Len: 13.540
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Hinataaa/autotrain-summ_arp_2-46098114797
```
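Alternatively, a minimal Python sketch (not part of the original card) using the summarization pipeline with this repo's model id:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="AryaParikh/autotrain-summ_arp_2-46098114797")
text = "I love AutoTrain 🤗"  # placeholder input; in practice pass the document you want summarized
print(summarizer(text, max_length=32, min_length=5)[0]["summary_text"])
```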
|
ShubhamSP/nd_pegasus_bigpatent_cnn_xsum_model
|
ShubhamSP
| 2023-04-03T07:07:54Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:big_patent",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-03T06:53:33Z |
---
tags:
- generated_from_trainer
datasets:
- big_patent
metrics:
- rouge
model-index:
- name: nd_pegasus_bigpatent_cnn_xsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: big_patent
type: big_patent
config: d
split: train[:200]
args: d
metrics:
- name: Rouge1
type: rouge
value: 0.3465
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nd_pegasus_bigpatent_cnn_xsum_model
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1037
- Rouge1: 0.3465
- Rouge2: 0.1181
- Rougel: 0.2258
- Rougelsum: 0.227
- Gen Len: 85.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.5734 | 1.0 | 80 | 3.1804 | 0.3468 | 0.1231 | 0.2262 | 0.2268 | 89.95 |
| 3.3146 | 2.0 | 160 | 3.1037 | 0.3465 | 0.1181 | 0.2258 | 0.227 | 85.75 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
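No usage example is given in the card; a minimal summarization sketch (not part of the original card) could be:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ShubhamSP/nd_pegasus_bigpatent_cnn_xsum_model")
patent_text = "A method and apparatus for cooling an electronic device..."  # placeholder; pass the full patent description here
print(summarizer(patent_text, max_length=128, min_length=30)[0]["summary_text"])
```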
|
datasciencetony/xmasv1
|
datasciencetony
| 2023-04-03T07:04:56Z | 29 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-03T06:58:02Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### xmasv1 on Stable Diffusion via Dreambooth
#### model by datasciencetony
This is the Stable Diffusion model fine-tuned on the xmasv1 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **house with xmas lights**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:






|
songyizhao/q-FrozenLake-v1-4x4-noSlippery
|
songyizhao
| 2023-04-03T07:03:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T07:03:54Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="songyizhao/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
qossain/finetuning-sentiment-model-3000-samples
|
qossain
| 2023-04-03T07:03:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-03T06:56:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6256
- Accuracy: 0.8667
- F1: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
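A minimal usage sketch (not part of the original card) with the text-classification pipeline:
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="qossain/finetuning-sentiment-model-3000-samples")
print(sentiment("This movie was a pleasant surprise from start to finish."))  # the example review is ours
```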
|
chromefan/gmater-ch-cartoon-ad-lora
|
chromefan
| 2023-04-03T06:51:34Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-04-03T06:30:39Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: <ad images cartoon advertisement with Chinese national trend style>
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - chromefan/gmater-ch-cartoon-ad-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on <ad images cartoon advertisement with Chinese national trend style> using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




|
silver/chatglm-6b-int4-slim
|
silver
| 2023-04-03T06:39:59Z | 169 | 39 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"glm",
"thudm",
"custom_code",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-03-20T05:39:44Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM-6B-INT4-Slim: a low-VRAM version of ChatGLM-6B-INT4
## Introduction
ChatGLM-6B-INT4-Slim is built on top of [ChatGLM-6B-INT4](https://huggingface.co/THUDM/chatglm-6b-int4) by pruning its vocabulary. Because ChatGLM-6B uses icetk, the first 20,000 tokens of its vocabulary are reserved for images. The text model never uses these image tokens, yet during inference and fine-tuning their embeddings still have to be loaded, and 20K extra logits must be computed when decoding every token, which wastes a fair amount of GPU memory. These tokens are therefore pruned away to save memory.
Apart from the vocabulary, ChatGLM-6B-INT4-Slim is structurally identical to ChatGLM-6B-INT4 and performs exactly the same; it can be regarded as a low-VRAM, drop-in replacement for ChatGLM-6B-INT4.
ChatGLM-6B is an open-source conversational language model that supports bilingual Chinese-English question answering. It is based on the [General Language Model (GLM)](https://github.com/THUDM/GLM) architecture and has 6.2 billion parameters. Combined with model quantization, it can be deployed locally on consumer GPUs (as little as 6 GB of VRAM at the INT4 quantization level). ChatGLM-6B uses the same technology as [ChatGLM](https://chatglm.cn) and is optimized for Chinese question answering and dialogue. After training on roughly 1T tokens of bilingual Chinese-English data, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B can already generate answers that align well with human preferences.
ChatGLM-6B-INT4 is the quantized weights of ChatGLM-6B. Specifically, ChatGLM-6B-INT4 applies INT4 quantization to the 28 GLM blocks of ChatGLM-6B, while the embedding and LM head are left unquantized. In theory the quantized model can run inference with 6 GB of VRAM (or RAM when running on CPU), which makes it possible to run on embedded devices such as a Raspberry Pi.
When running on CPU, a CPU kernel is compiled automatically for your hardware; make sure GCC and OpenMP are installed (usually preinstalled on Linux, but they must be installed manually on Windows) to get the best parallel performance.
All copyright and credit for the model belong to the official ChatGLM team; ChatGLM-6B-INT4-Slim was made purely for everyone's convenience.
## Dependencies
```shell
pip install protobuf==3.20.0 transformers==4.26.1 icetk cpm_kernels
```
## Usage in code
You can call the ChatGLM-6B model to generate a conversation with the following code:
```ipython
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("silver/chatglm-6b-int4-slim", trust_remote_code=True)
>>> model = AutoModel.from_pretrained("silver/chatglm-6b-int4-slim", trust_remote_code=True).half().cuda()
>>> response, history = model.chat(tokenizer, "你好", history=[])
>>> print(response)
你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
>>> response, history = model.chat(tokenizer, "晚上睡不着应该怎么办", history=history)
>>> print(response)
晚上睡不着可能会让你感到焦虑或不舒服,但以下是一些可以帮助你入睡的方法:
1. 制定规律的睡眠时间表:保持规律的睡眠时间表可以帮助你建立健康的睡眠习惯,使你更容易入睡。尽量在每天的相同时间上床,并在同一时间起床。
2. 创造一个舒适的睡眠环境:确保睡眠环境舒适,安静,黑暗且温度适宜。可以使用舒适的床上用品,并保持房间通风。
3. 放松身心:在睡前做些放松的活动,例如泡个热水澡,听些轻柔的音乐,阅读一些有趣的书籍等,有助于缓解紧张和焦虑,使你更容易入睡。
4. 避免饮用含有咖啡因的饮料:咖啡因是一种刺激性物质,会影响你的睡眠质量。尽量避免在睡前饮用含有咖啡因的饮料,例如咖啡,茶和可乐。
5. 避免在床上做与睡眠无关的事情:在床上做些与睡眠无关的事情,例如看电影,玩游戏或工作等,可能会干扰你的睡眠。
6. 尝试呼吸技巧:深呼吸是一种放松技巧,可以帮助你缓解紧张和焦虑,使你更容易入睡。试着慢慢吸气,保持几秒钟,然后缓慢呼气。
如果这些方法无法帮助你入睡,你可以考虑咨询医生或睡眠专家,寻求进一步的建议。
```
For more usage instructions, including how to run the command-line and web demos and how to use model quantization to save GPU memory, please refer to the [Github Repo](https://github.com/THUDM/ChatGLM-6B).
## License
The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license; use of the ChatGLM-6B model weights must follow the [Model License](MODEL_LICENSE).
## Citation
If you find this work helpful, please consider citing the official ChatGLM team's papers:
```
@inproceedings{
zeng2023glm-130b,
title={{GLM}-130B: An Open Bilingual Pre-trained Model},
author={Aohan Zeng and Xiao Liu and Zhengxiao Du and Zihan Wang and Hanyu Lai and Ming Ding and Zhuoyi Yang and Yifan Xu and Wendi Zheng and Xiao Xia and Weng Lam Tam and Zixuan Ma and Yufei Xue and Jidong Zhai and Wenguang Chen and Zhiyuan Liu and Peng Zhang and Yuxiao Dong and Jie Tang},
booktitle={The Eleventh International Conference on Learning Representations (ICLR)},
year={2023},
url={https://openreview.net/forum?id=-Aw0rrrPUF}
}
```
```
@inproceedings{du2022glm,
title={GLM: General Language Model Pretraining with Autoregressive Blank Infilling},
author={Du, Zhengxiao and Qian, Yujie and Liu, Xiao and Ding, Ming and Qiu, Jiezhong and Yang, Zhilin and Tang, Jie},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={320--335},
year={2022}
}
```
|
alibaba-pai/pai-diffusion-general-large-zh-controlnet-depth
|
alibaba-pai
| 2023-04-03T06:28:22Z | 1 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"text-to-image",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-04-03T03:32:09Z |
---
license: apache-2.0
tags:
- pytorch
- diffusers
- text-to-image
---
# Chinese Latent Diffusion Model
We open-source a ControlNet adapted to the `alibaba-pai/pai-diffusion-general-large-zh` model; it takes the depth map of an input image as conditioning and enables controllable generation.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

controlnet_id = "alibaba-pai/pai-diffusion-general-large-zh-controlnet-depth"
controlnet = ControlNetModel.from_pretrained(controlnet_id)
model_id = "alibaba-pai/pai-diffusion-general-large-zh"
pipe = StableDiffusionControlNetPipeline.from_pretrained(model_id, controlnet=controlnet)
pipe = pipe.to("cuda")

# The conditioning image is a depth map of the target scene
image = Image.open("depth_image.png")
prompt = "雪地上的帐篷"  # "a tent on the snow"
image = pipe(prompt, image).images[0]
image.save("result.png")
```
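The pipeline above expects a depth map as the conditioning image. If you only have an ordinary photo, a minimal sketch for producing one (not part of the original card; the file names are placeholders) could use the 🤗 Transformers depth-estimation pipeline:
```python
from transformers import pipeline
from PIL import Image

# Hypothetical source photo; any RGB image works
source = Image.open("photo.png")

# The depth-estimation pipeline defaults to an Intel DPT checkpoint
depth_estimator = pipeline("depth-estimation")
depth = depth_estimator(source)["depth"]  # a PIL image holding the depth map
depth.save("depth_image.png")
```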
|
ghdi/imbd-reviews-sample
|
ghdi
| 2023-04-03T06:18:50Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-02T12:12:15Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ghdi/imbd-reviews-sample
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ghdi/imbd-reviews-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.9326
- Validation Loss: 6.3691
- Epoch: 19
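A minimal generation sketch (not from the original card; the repo ships TensorFlow weights, so TensorFlow must be installed, and the prompt is only an example):
```python
from transformers import pipeline

# Loads the TF checkpoint from this repo and continues the prompt
generator = pipeline("text-generation", model="ghdi/imbd-reviews-sample")
print(generator("This movie was", max_new_tokens=40)[0]["generated_text"])
```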
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.4384 | 7.2924 | 0 |
| 7.0231 | 6.9971 | 1 |
| 6.7445 | 6.7865 | 2 |
| 6.5201 | 6.6116 | 3 |
| 6.2942 | 6.4619 | 4 |
| 6.0867 | 6.3691 | 5 |
| 5.9325 | 6.3691 | 6 |
| 5.9331 | 6.3691 | 7 |
| 5.9327 | 6.3691 | 8 |
| 5.9318 | 6.3691 | 9 |
| 5.9309 | 6.3691 | 10 |
| 5.9304 | 6.3691 | 11 |
| 5.9312 | 6.3691 | 12 |
| 5.9339 | 6.3691 | 13 |
| 5.9322 | 6.3691 | 14 |
| 5.9351 | 6.3691 | 15 |
| 5.9311 | 6.3691 | 16 |
| 5.9328 | 6.3691 | 17 |
| 5.9307 | 6.3691 | 18 |
| 5.9326 | 6.3691 | 19 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
Grigsss/pignfts
|
Grigsss
| 2023-04-03T05:36:50Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-02T20:12:38Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: pigNFTs
---
### pigNFTs Dreambooth model trained by Grigsss with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
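If you would rather run the concept locally than in the Colab notebook, a minimal `diffusers` sketch (an assumption, not an official snippet from this card; the prompt is just an example) looks like this:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Grigsss/pignfts", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Include the concept token "pigNFTs" in the prompt
image = pipe("a portrait of pigNFTs in a sunny meadow").images[0]
image.save("pignfts.png")
```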
Sample pictures of:
pigNFTs (use that on your prompt)

|
LarryAIDraw/tohsakaRinFateStay_v1
|
LarryAIDraw
| 2023-04-03T05:35:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-30T16:16:43Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/26096/tohsaka-rin-fate-stay-night
|
smjain/flan-jain-xl
|
smjain
| 2023-04-03T04:46:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-02T23:31:10Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of Google's Flan XL model and can be used for QA (mainly in-context QA).
- **Developed by:** [shashank]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
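In the absence of an official snippet, here is a minimal in-context QA sketch (an assumption based on the T5 architecture listed in the tags; the context and question are just examples):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "smjain/flan-jain-xl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# In-context QA: put both the context and the question into the prompt
prompt = (
    "Answer the question based on the context.\n"
    "Context: The Eiffel Tower was completed in 1889 and is located in Paris.\n"
    "Question: When was the Eiffel Tower completed?"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```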
## Training Details
The model was trained on 4 A4000 GPUs; fine-tuning took around 9 hours.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
intanm/clm-20230403-001-1
|
intanm
| 2023-04-03T04:44:28Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-03T04:23:49Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: clm-20230403-001-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clm-20230403-001-1
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2654 | 1.0 | 920 | 0.0014 |
| 0.0018 | 2.0 | 1840 | 0.0007 |
| 0.0009 | 3.0 | 2760 | 0.0005 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
seongwoon/Bert_labor_space_token_512_batch_8
|
seongwoon
| 2023-04-03T04:41:41Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-03-20T12:17:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: labor_space_distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# labor_space_distilbert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
|
steren/deepwater
|
steren
| 2023-04-03T04:23:29Z | 0 | 3 | null |
[
"image-to-image",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-04-02T22:36:17Z |
---
license: apache-2.0
pipeline_tag: image-to-image
---
Deep-learning based enhancer for underwater pictures
Model by [Anne Menini](https://github.com/annemenini)
* Live demo: https://deepwater-project.web.app
* Blog post: https://labs.steren.fr/2019/deepwater
* Source code: https://github.com/annemenini/deepwater
|
nobtunotnobutno/Dislyte-LORA
|
nobtunotnobutno
| 2023-04-03T04:15:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-03T04:10:03Z |
---
license: creativeml-openrail-m
---
|
adhisetiawan/a2c-AntBulletEnv-v0
|
adhisetiawan
| 2023-04-03T04:06:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T04:05:06Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1598.88 +/- 91.40
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Hypothetical filename; verify it against the files in this repo
checkpoint = load_from_hub("adhisetiawan/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
TheLoveone/Thai_LORA
|
TheLoveone
| 2023-04-03T04:00:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-03T03:57:14Z |
---
license: creativeml-openrail-m
---
|
srimoyee12/my_awesome_model
|
srimoyee12
| 2023-04-03T03:46:19Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-29T03:37:19Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: srimoyee12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# srimoyee12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Auditor Review Dataset](https://huggingface.co/datasets/demo-org/auditor_review).
It achieves the following results on the evaluation set:
- Train Loss: 0.1735
- Validation Loss: 0.3834
- Train Accuracy: 0.8524
- Epoch: 3
## Model description
This is a simple classifier model based on DistilBERT. It classifies input text as Negative, Neutral, or Positive sentiment.
## Intended uses & limitations
It can be used for text classification.
It was created for illustration purposes and might not have the highest accuracy.
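A minimal classification sketch (not from the original card; TensorFlow must be installed because the repo ships TF weights, and the example sentence is just an illustration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="srimoyee12/my_awesome_model")
print(classifier("Revenue grew strongly and exceeded expectations."))
```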
## Training and evaluation data
Default split from the [dataset card](https://huggingface.co/datasets/demo-org/auditor_review)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5919 | 0.4004 | 0.8359 | 0 |
| 0.2881 | 0.3590 | 0.8473 | 1 |
| 0.1735 | 0.3834 | 0.8524 | 2 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
yumingyi/poca-SoccerTwos-v3-15M
|
yumingyi
| 2023-04-03T03:34:38Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-03T03:34:19Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: yumingyi/poca-SoccerTwos-v3-15M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
romeromuerto/q-FrozenLake-v1
|
romeromuerto
| 2023-04-03T03:19:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T03:12:51Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="romeromuerto/q-FrozenLake-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
msthil/Reinforce-pixel-copter-unit4-v1
|
msthil
| 2023-04-03T03:16:15Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T02:19:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixel-copter-unit4-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 30.50 +/- 18.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
arkadyark/q-Taxi-v3
|
arkadyark
| 2023-04-03T02:49:56Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T02:48:23Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="arkadyark/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_name"])
```
|
GraydientPlatformAPI/model_124
|
GraydientPlatformAPI
| 2023-04-03T02:44:04Z | 30 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-02T13:16:30Z |
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
arkadyark/q-FrozenLake-v1
|
arkadyark
| 2023-04-03T02:39:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T02:28:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1-4x4-no_slippery**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-4x4-no_slippery** .
## Usage
```python
model = load_from_hub(repo_id="arkadyark/q-FrozenLake-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_name"])
```
|
saif-daoud/whisper-small-hi-2400_500_140
|
saif-daoud
| 2023-04-03T02:37:04Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:afrispeech-200",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-03T01:05:21Z |
---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: whisper-small-hi-2400_500_140
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: hausa
split: train
args: hausa
metrics:
- name: Wer
type: wer
value: 0.31382914814369817
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2400_500_140
This model is a fine-tuned version of [saif-daoud/whisper-small-hi-2400_500_135](https://huggingface.co/saif-daoud/whisper-small-hi-2400_500_135) on the afrispeech-200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7471
- Wer: 0.3138
## Model description
More information needed
## Intended uses & limitations
More information needed
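Although the card gives no usage details, a minimal transcription sketch (an assumption; the audio file name is a placeholder and ffmpeg is needed to decode it) would look like this:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="saif-daoud/whisper-small-hi-2400_500_140")
print(asr("sample.wav")["text"])
```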
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 1362
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7048 | 0.5 | 681 | 0.7493 | 0.3059 |
| 0.6682 | 1.5 | 1362 | 0.7471 | 0.3138 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
zhuqi/PPO_LunarLander-v2_steps10M
|
zhuqi
| 2023-04-03T02:04:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-03T01:48:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.96 +/- 22.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Training
```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
env = make_vec_env("LunarLander-v2", n_envs=16)
model = PPO('MlpPolicy',
env=env,
n_steps=1024,
batch_size=64,
n_epochs=4,
gamma=0.999,
gae_lambda=0.98,
ent_coef=0.01,
verbose=1)
model.learn(total_timesteps=10000000, progress_bar=True)
```
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
repo_id = "zhuqi/PPO_LunarLander-v2_steps10M" # The repo_id
filename = "PPO_LunarLander-v2_steps10000000.zip" # The model filename.zip
# When the model was trained on Python 3.8 the pickle protocol is 5
# But Python 3.6, 3.7 use protocol 4
# In order to get compatibility we need to:
# 1. Install pickle5 (we did this at the beginning of the colab)
# 2. Create a custom empty object we pass as parameter to PPO.load()
custom_objects = {
"learning_rate": 0.0,
"lr_schedule": lambda _: 0.0,
"clip_range": lambda _: 0.0,
}
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
```
|
saif-daoud/whisper-small-hi-2400_500_136
|
saif-daoud
| 2023-04-03T01:04:06Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:afrispeech-200",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-04-02T23:28:28Z |
---
tags:
- generated_from_trainer
datasets:
- afrispeech-200
metrics:
- wer
model-index:
- name: whisper-small-hi-2400_500_136
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: afrispeech-200
type: afrispeech-200
config: hausa
split: train
args: hausa
metrics:
- name: Wer
type: wer
value: 0.31118587047939444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-2400_500_136
This model is a fine-tuned version of [saif-daoud/whisper-small-hi-2400_500_135](https://huggingface.co/saif-daoud/whisper-small-hi-2400_500_135) on the afrispeech-200 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7547
- Wer: 0.3112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 1362
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7162 | 0.5 | 681 | 0.7551 | 0.3118 |
| 0.7109 | 1.5 | 1362 | 0.7547 | 0.3112 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
GraydientPlatformAPI/model_125
|
GraydientPlatformAPI
| 2023-04-03T01:02:14Z | 29 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-04-03T00:46:41Z |
---
license: openrail
library_name: diffusers
pipeline_tag: text-to-image
---
|
mustapha/e5-small-Quran
|
mustapha
| 2023-04-03T00:01:14Z | 110 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"dataset:M-AI-C/quran-en-tafssirs",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-04-02T16:20:52Z |
---
license: mit
datasets:
- M-AI-C/quran-en-tafssirs
language:
- en
---
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: Who is prophet known for patience',
'query: Who is moses',
"passage: passage 1",
"passage: passage 2"]
# The snippet below loads the base e5-small checkpoint; swap in 'mustapha/e5-small-Quran' to use this fine-tuned model
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-small')
model = AutoModel.from_pretrained('intfloat/e5-small')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
|
hussamalafandi/dqn-SpaceInvadersNoFrameskip-v4
|
hussamalafandi
| 2023-04-02T23:56:31Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T23:55:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 568.00 +/- 157.90
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hussamalafandi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hussamalafandi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hussamalafandi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
tenich/Reinforce-PixelCopter
|
tenich
| 2023-04-02T23:52:29Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T17:08:07Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 43.10 +/- 29.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jluckyboyj/xlm-roberta-large-finetuned-augument-visquad2-2-4-2023-3
|
jluckyboyj
| 2023-04-02T23:42:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-04-02T07:45:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-large-finetuned-augument-visquad2-2-4-2023-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-augument-visquad2-2-4-2023-3
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Best F1: 76.3263
- Loss: 2.9101
- Exact: 41.0887
- F1: 58.6813
- Total: 3821
- Hasans Exact: 56.0498
- Hasans F1: 81.3876
- Hasans Total: 2653
- Noans Exact: 7.1062
- Noans F1: 7.1062
- Noans Total: 1168
- Best Exact: 60.3769
- Best Exact Thresh: 0.7798
- Best F1 Thresh: 0.9874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Best F1 | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Noans Exact | Noans F1 | Noans Total | Best Exact | Best Exact Thresh | Best F1 Thresh |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:-----------:|:--------:|:-----------:|:----------:|:-----------------:|:--------------:|
| 0.9242 | 1.0 | 2807 | 69.6410 | 1.0239 | 37.3201 | 55.1119 | 3821 | 53.7505 | 79.3752 | 2653 | 0.0 | 0.0 | 1168 | 55.0118 | 0.8222 | 0.8968 |
| 0.3756 | 2.0 | 5615 | 73.7526 | 1.0092 | 38.8642 | 55.8953 | 3821 | 55.9744 | 80.5035 | 2653 | 0.0 | 0.0 | 1168 | 59.4085 | 0.9128 | 0.9611 |
| 0.2595 | 3.0 | 8423 | 75.1395 | 1.0121 | 39.7278 | 56.5553 | 3821 | 57.1806 | 81.4165 | 2653 | 0.0856 | 0.0856 | 1168 | 60.6386 | 0.8138 | 0.9174 |
| 0.185 | 4.0 | 11231 | 75.2011 | 1.2309 | 39.2306 | 56.7010 | 3821 | 56.2005 | 81.3625 | 2653 | 0.6849 | 0.6849 | 1168 | 59.7749 | 0.7215 | 0.8729 |
| 0.1336 | 5.0 | 14038 | 75.0330 | 1.4052 | 38.4454 | 56.1488 | 3821 | 55.2582 | 80.7556 | 2653 | 0.2568 | 0.2568 | 1168 | 59.4085 | 0.6660 | 0.8646 |
| 0.0976 | 6.0 | 16846 | 75.4976 | 1.6109 | 38.5763 | 56.1952 | 3821 | 55.4467 | 80.8224 | 2653 | 0.2568 | 0.2568 | 1168 | 59.8534 | 0.6631 | 0.9605 |
| 0.072 | 7.0 | 19654 | 76.0690 | 1.9673 | 39.5970 | 56.9041 | 3821 | 56.0874 | 81.0142 | 2653 | 2.1404 | 2.1404 | 1168 | 60.5862 | 0.7197 | 0.9882 |
| 0.0526 | 8.0 | 22462 | 75.3652 | 2.2945 | 38.8903 | 56.5382 | 3821 | 55.3336 | 80.7511 | 2653 | 1.5411 | 1.5411 | 1168 | 59.8273 | 0.6659 | 0.9573 |
| 0.0389 | 9.0 | 25269 | 76.0674 | 2.6609 | 42.5281 | 59.8494 | 3821 | 56.0121 | 80.9591 | 2653 | 11.9007 | 11.9007 | 1168 | 60.4292 | 0.6494 | 0.9632 |
| 0.0291 | 10.0 | 28070 | 76.3263 | 2.9101 | 41.0887 | 58.6813 | 3821 | 56.0498 | 81.3876 | 2653 | 7.1062 | 7.1062 | 1168 | 60.3769 | 0.7798 | 0.9874 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
globophobe/ppo-LunarLander-v2
|
globophobe
| 2023-04-02T23:34:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T23:34:15Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.55 +/- 18.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; verify it against the files in this repo
checkpoint = load_from_hub("globophobe/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
steren/nerf-wood
|
steren
| 2023-04-02T23:17:42Z | 0 | 0 | null |
[
"dataset:steren/wooden-sculpture",
"license:cc-by-4.0",
"region:us"
] | null | 2023-04-02T22:59:48Z |
---
license: cc-by-4.0
datasets:
- steren/wooden-sculpture
---
NeRF of a wooden sculpture.
Download images from [dataset](https://huggingface.co/datasets/steren/wooden-sculpture) into an `images` folder.
Run with [instant-ngp](https://github.com/NVlabs/instant-ngp)
|
spolisar/ppo-LunarLander-v2
|
spolisar
| 2023-04-02T23:04:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T23:04:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.28 +/- 19.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; verify it against the files in this repo
checkpoint = load_from_hub("spolisar/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
asenella/reproducing_mmvae_3
|
asenella
| 2023-04-02T21:56:58Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-04-02T21:56:55Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
yshen99/ZhiGuoLiZheng-GPT2
|
yshen99
| 2023-04-02T21:43:03Z | 536 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-12-14T16:54:36Z |
---
license: mit
widget:
- text: "要进一步加强党风廉政建设"
example_title: "example 1"
- text: "要落实全面建成"
example_title: "example 2"
---
GPT2 model fine-tuned with Chinese political text.
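A minimal generation sketch (not part of the original card), using one of the widget prompts above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="yshen99/ZhiGuoLiZheng-GPT2")
print(generator("要进一步加强党风廉政建设", max_new_tokens=50)[0]["generated_text"])
```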
|
billster45/autotrain-cat_dog-46040114726
|
billster45
| 2023-04-02T20:57:09Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:billster45/autotrain-data-cat_dog",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-04-02T20:54:31Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- billster45/autotrain-data-cat_dog
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.094614881827817
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 46040114726
- CO2 Emissions (in grams): 1.0946
## Validation Metrics
- Loss: 0.001
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
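A minimal inference sketch (not part of the original card; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="billster45/autotrain-cat_dog-46040114726")
print(classifier("my_pet.jpg"))
```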
|
bjarlestam/Reinforce-Pixelcopter-PLE-v0-2
|
bjarlestam
| 2023-04-02T20:50:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T20:50:26Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 12.60 +/- 12.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zee2221/ai_me
|
zee2221
| 2023-04-02T20:42:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"code",
"medical",
"chemistry",
"biology",
"music",
"art",
"legal",
"text-generation-inference",
"finance",
"not-for-all-eyes",
"feature-extraction",
"en",
"ur",
"it",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:gsdf/EasyNegative",
"dataset:Nerfgun3/bad_prompt",
"dataset:tatsu-lab/alpaca",
"dataset:stanfordnlp/SHP",
"dataset:yizhongw/self_instruct",
"dataset:nyanko7/LLaMA-65B",
"dataset:laion/OIG",
"dataset:Anthropic/hh-rlhf",
"dataset:SirNeural/flan_v2",
"license:openrail",
"region:us"
] |
feature-extraction
| 2023-03-24T00:30:38Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
- gsdf/EasyNegative
- Nerfgun3/bad_prompt
- tatsu-lab/alpaca
- stanfordnlp/SHP
- yizhongw/self_instruct
- nyanko7/LLaMA-65B
- laion/OIG
- Anthropic/hh-rlhf
- SirNeural/flan_v2
language:
- en
- ur
- it
metrics:
- accuracy
- code_eval
- bertscore
- bleurt
- brier_score
- cer
- character
- charcut_mt
- chrf
library_name: diffusers
pipeline_tag: feature-extraction
tags:
- code
- medical
- chemistry
- biology
- music
- art
- legal
- text-generation-inference
- finance
- not-for-all-eyes
---
|
Hristo/unit3_model
|
Hristo
| 2023-04-02T20:39:35Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T18:40:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 662.00 +/- 263.50
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Hristo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Hristo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Hristo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 1000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
msthil/CartPole-Unit4-v2
|
msthil
| 2023-04-02T20:26:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T20:25:52Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-Unit4-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ARandomFrenchDev/poca-SoccerTwos
|
ARandomFrenchDev
| 2023-04-02T20:25:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-02T20:23:05Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: ARandomFrenchDev/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Bahasalab/BahasaGPT-1_int8
|
Bahasalab
| 2023-04-02T20:17:38Z | 1 | 0 |
transformers
|
[
"transformers",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] | null | 2023-04-02T20:14:03Z |
---
license: bigscience-bloom-rail-1.0
---
# BahasaGPT-1 Fine-Tuning Documentation Summary (INT (8-BIT))
## Introduction
This document provides an overview of the BahasaGPT-1 model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 70,000 Indonesian instructions.
## Model Details
**Model Name:** BahasaGPT-1
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** An Indonesian instruction dataset of over 70k examples, generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 70,000 Indonesian instructions, which were generated using the Alpaca method from Stanford and translated instructions from OA. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
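For readers who want to try the resulting model, here is a minimal loading sketch (an assumption; the card ships no usage snippet, 8-bit loading needs the `accelerate` and `bitsandbytes` packages plus a CUDA GPU, and the prompt is just an example):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Bahasalab/BahasaGPT-1_int8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# load_in_8bit requires bitsandbytes; device_map="auto" requires accelerate
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True)

prompt = "Jelaskan cara membuat nasi goreng."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```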
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
1. **Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
2. **Repeated Tokens:** The model occasionally produces repeated tokens in the output, which may affect the overall coherence and readability of the generated text.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 70,000 Indonesian instructions generated using the Alpaca method and translated instructions from OA. Despite some limitations, such as occasional hallucination and repeated tokens, the model provides a valuable tool for working with Indonesian language tasks.
|
jerka/taxi_v3
|
jerka
| 2023-04-02T20:13:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T20:13:43Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jerka/taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
cleth/Reinforce-Pixelcopter-PLE-v0
|
cleth
| 2023-04-02T20:11:08Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T16:54:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.30 +/- 33.81
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
indiaj27/my_awesome_model
|
indiaj27
| 2023-04-02T20:01:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-02T18:36:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9328
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2304
- Accuracy: 0.9328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2312 | 1.0 | 1563 | 0.1898 | 0.9276 |
| 0.1522 | 2.0 | 3126 | 0.2304 | 0.9328 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
mariav/helsinki-opus-de-en-fine-tuned-wmt16
|
mariav
| 2023-04-02T19:52:17Z | 71 | 1 |
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"translation",
"de",
"en",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-03-30T10:09:32Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mariav/helsinki-opus-de-en-fine-tuned-wmt16
results: []
datasets:
- wmt16
language:
- de
- en
metrics:
- bleu
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mariav/helsinki-opus-de-en-fine-tuned-wmt16
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0077
- Validation Loss: 1.4381
- Epoch: 4
## Model description
This model is a fine-tuned version of Helsinki-NLP/opus-mt-de-en, trained on the wmt16 dataset for the German-English language pair.
A tutorial for this task is available in the files.
## Intended uses & limitations
Limitations: intended for scholarly use.
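A minimal translation sketch (not part of the original card; TensorFlow must be installed because the repo ships TF weights, and the sentence is just an example):
```python
from transformers import pipeline

translator = pipeline("translation_de_to_en", model="mariav/helsinki-opus-de-en-fine-tuned-wmt16")
print(translator("Das Wetter ist heute sehr schön.")[0]["translation_text"])
```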
## Training and evaluation data
Training was done with Keras via Transformers.
Evaluation uses the BLEU score.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1245, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5115 | 1.4061 | 0 |
| 1.2931 | 1.4111 | 1 |
| 1.1590 | 1.4200 | 2 |
| 1.0644 | 1.4324 | 3 |
| 1.0077 | 1.4381 | 4 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.2
|
dvilasuero/alpaca-gigo-detector-setfit
|
dvilasuero
| 2023-04-02T19:49:27Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-04-02T19:49:18Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dvilasuero/alpaca-gigo-detector-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dvilasuero/alpaca-gigo-detector-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
artursal/layoutlmv3-finetuned-cord_100
|
artursal
| 2023-04-02T19:45:42Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-02T18:55:43Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9295774647887324
- name: Recall
type: recall
value: 0.938622754491018
- name: F1
type: f1
value: 0.9340782122905028
- name: Accuracy
type: accuracy
value: 0.9303904923599321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Precision: 0.9296
- Recall: 0.9386
- F1: 0.9341
- Accuracy: 0.9304
## Model description
More information needed
## Intended uses & limitations
More information needed
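A minimal inference sketch, assuming Tesseract and `pytesseract` are available so the base processor can run OCR (the image path is a placeholder, and the base processor is used in case this repo does not ship its own):
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# The base processor applies OCR by default (requires pytesseract); the fine-tuned model comes from this repo
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForTokenClassification.from_pretrained("artursal/layoutlmv3-finetuned-cord_100")

image = Image.open("receipt.png").convert("RGB")  # placeholder receipt scan
encoding = processor(image, return_tensors="pt")
logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```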
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 0.9987 | 0.7470 | 0.7957 | 0.7706 | 0.8043 |
| 1.3632 | 8.33 | 500 | 0.5299 | 0.8641 | 0.8855 | 0.8747 | 0.8829 |
| 1.3632 | 12.5 | 750 | 0.3861 | 0.8853 | 0.9124 | 0.8986 | 0.9126 |
| 0.3151 | 16.67 | 1000 | 0.3392 | 0.9154 | 0.9311 | 0.9232 | 0.9321 |
| 0.3151 | 20.83 | 1250 | 0.3382 | 0.9247 | 0.9371 | 0.9309 | 0.9308 |
| 0.1265 | 25.0 | 1500 | 0.3364 | 0.9225 | 0.9356 | 0.9290 | 0.9300 |
| 0.1265 | 29.17 | 1750 | 0.3333 | 0.9304 | 0.9401 | 0.9352 | 0.9321 |
| 0.0716 | 33.33 | 2000 | 0.3381 | 0.9296 | 0.9394 | 0.9345 | 0.9312 |
| 0.0716 | 37.5 | 2250 | 0.3474 | 0.9290 | 0.9409 | 0.9349 | 0.9321 |
| 0.0525 | 41.67 | 2500 | 0.3404 | 0.9296 | 0.9386 | 0.9341 | 0.9304 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|
hopkins/strict-small-1
|
hopkins
| 2023-04-02T19:45:32Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-02T18:06:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: strict-small-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# strict-small-1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
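A minimal generation sketch (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="hopkins/strict-small-1")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```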
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.5772 | 49.96 | 400 | 6.3333 |
| 1.4544 | 99.96 | 800 | 8.0001 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
johnjose223/distiled_bert-finetuned-squad_v2
|
johnjose223
| 2023-04-02T19:42:30Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-03-17T06:09:15Z |
---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distiled_bert-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distiled_bert-finetuned-squad_v2
This model is a fine-tuned version of [johnjose223/distiled_bert-finetuned-squad_v2](https://huggingface.co/johnjose223/distiled_bert-finetuned-squad_v2) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
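A minimal inference sketch (the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="johnjose223/distiled_bert-finetuned-squad_v2")
result = qa(
    question="What does SQuAD v2 add compared to SQuAD v1?",
    context="SQuAD v2 combines the SQuAD v1 questions with unanswerable questions written to look similar to answerable ones.",
    handle_impossible_answer=True,  # squad_v2-style models can predict that no answer exists
)
print(result)
```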
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.27.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Krish2002/Puppy_play
|
Krish2002
| 2023-04-02T19:26:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-04-02T19:25:18Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
|
ritakurban/DistilGPT_PubMedQA
|
ritakurban
| 2023-04-02T19:08:39Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-02T19:00:17Z |
# DistilGPT2 Fine-Tuned on PubMedQA Artificial Subset
This model is a fine-tuned version of DistilGPT2 for question-answering tasks in the biomedical domain. The model was trained on a subset of 50,000 artificial samples from the PubMedQA dataset.
## Model Details
1. Model architecture: DistilGPT2
2. Training dataset: 50,000 samples from PubMedQA artificial subset
3. Training epochs: 3
4. Tokenizer maximum length: 512
5. Fine-Tuning Details: Model finetuning was done for three epochs using a standard model training pipeline provided by the Huggingface library. During training, the tokenizer was configured with a maximum token length of 512.
# Example Usage
You can use this model for medical question-answering tasks by simply loading it with the Huggingface transformers library and providing a prompt.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("ritakurban/DistilGPT_PubMedQA")
model = GPT2LMHeadModel.from_pretrained("ritakurban/DistilGPT_PubMedQA")
prompt = "question: What is the primary function of the liver? context: The liver is a vital organ that performs several essential functions, including detoxification, protein synthesis, and the production of biochemicals necessary for digestion."
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=100, num_return_sequences=1, no_repeat_ngram_size=2)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
## Evaluation Metrics
Model performance was evaluated using Semantic Textual Similarity, Word Mover's Distance, and Grammar Errors. Detailed evaluation results can be found in the accompanying paper.
## Limitations
While this model has been fine-tuned on a specific biomedical dataset, it may not perform equally well on other medical or general domain questions. Additionally, the model may generate plausible-sounding but incorrect answers. Always verify the generated answers with reliable sources before using them for critical decision-making.
## Acknowledgements
We want to thank Huggingface for their excellent transformers library and the creators of the original DistilGPT2 model. We also thank the authors of the PubMedQA dataset for providing a valuable resource for training and evaluating biomedical question-answering models.
|
JKSoon/sd-class-cats
|
JKSoon
| 2023-04-02T18:34:35Z | 31 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-04-02T18:34:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('JKSoon/sd-class-cats')
image = pipeline().images[0]
image
```
|
Bahasalab/BahasaGPT-1
|
Bahasalab
| 2023-04-02T18:27:14Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] | null | 2023-04-02T16:18:51Z |
---
license: bigscience-bloom-rail-1.0
---
# BahasaGPT-1 Fine-Tuning Documentation Summary
## Introduction
This document provides an overview of the BahasaGPT-1 model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 70,000 Indonesian instructions.
## Model Details
**Model Name:** BahasaGPT-1
**Model Source:** Bloomz-7B-mt
**Dataset for Fine-Tuning:** Over 70k Indonesia Instruct Dataset generated using the Alpaca method from the following sources:
- [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- Translated instructions from OA ([Anh/data at main · LAION-AI/Anh](https://github.com/LAION-AI/Anh))
## Fine-Tuning Process
The BahasaGPT-1 model was fine-tuned using a dataset of over 70,000 Indonesian instructions, which were generated using the Alpaca method from Stanford and translated instructions from OA. This combination of datasets allowed the model to be better adapted to the specific needs of Indonesian language tasks.
The fine-tuning process involved adjusting the model's weights and biases based on the input dataset. This was done iteratively to optimize the model's performance for the specific task in the Indonesian language.
## Known Limitations
Despite the successful fine-tuning, the BahasaGPT-1 model still has some limitations:
1. **Hallucination:** The model sometimes generates outputs that may seem plausible but are not based on the input data. This may lead to incorrect or nonsensical responses in some cases.
2. **Repeated Tokens:** The model occasionally produces repeated tokens in the output, which may affect the overall coherence and readability of the generated text.
## Conclusion
The BahasaGPT-1 model is a fine-tuned language model for Indonesian language tasks, based on the Bloomz-7B-mt architecture. The model was trained on a dataset of over 70,000 Indonesian instructions generated using the Alpaca method and translated instructions from OA. Despite some limitations, such as occasional hallucination and repeated tokens, the model provides a valuable tool for working with Indonesian language tasks.
## How to Run
```python
import logging
from typing import Tuple
import torch
import numpy as np
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
PreTrainedModel,
PreTrainedTokenizer,
)
END_KEY = "### End"
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY_NL = f"### Response:\n"
DEFAULT_SEED = 42
# The format of the instruction the model has been trained on.
PROMPT_FORMAT = """%s
%s
{instruction}
%s""" % (
"Dibawah ini adalah instruksi yang menjelaskan suatu tugas.",
INSTRUCTION_KEY,
RESPONSE_KEY_NL,
)
def xglm_prompt(dic):
if dic.get("input") is None:
text = PROMPT_DICT['prompt_no_input'].format_map(dic)
else:
text = PROMPT_DICT['prompt_input'].format_map(dic)
return text
logger = logging.getLogger(__name__)
def load_model_tokenizer_for_generate(
pretrained_model_name_or_path: str,
) -> Tuple[PreTrainedModel, PreTrainedTokenizer]:
"""Loads the model and tokenizer so that it can be used for generating responses.
Args:
pretrained_model_name_or_path (str): name or path for model
Returns:
Tuple[PreTrainedModel, PreTrainedTokenizer]: model and tokenizer
"""
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, padding_side="left")
model = AutoModelForCausalLM.from_pretrained(
pretrained_model_name_or_path,load_in_8bit=True, device_map="auto", trust_remote_code=True
)
return model, tokenizer
def get_special_token_id(tokenizer: PreTrainedTokenizer, key: str) -> int:
"""Gets the token ID for a given string that has been added to the tokenizer as a special token.
When training, we configure the tokenizer so that the sequences like "### Instruction:" and "### End" are
treated specially and converted to a single, new token. This retrieves the token ID each of these keys map to.
Args:
tokenizer (PreTrainedTokenizer): the tokenizer
key (str): the key to convert to a single token
Raises:
RuntimeError: if more than one ID was generated
Returns:
int: the token ID for the given key
"""
token_ids = tokenizer.encode(key)
if len(token_ids) > 1:
raise RuntimeError(f"Expected only a single token for '{key}' but found {token_ids}")
return token_ids[0]
def generate_response(
instruction: str,
*,
model: PreTrainedModel,
tokenizer: PreTrainedTokenizer,
do_sample: bool = True,
max_new_tokens: int = 256,
top_p: float = 0.92,
top_k: int = 40,
**kwargs,
) -> str:
"""Given an instruction, uses the model and tokenizer to generate a response. This formats the instruction in
the instruction format that the model was fine-tuned on.
Args:
instruction (str): instruction to generate response for
model (PreTrainedModel): model to use
tokenizer (PreTrainedTokenizer): tokenizer to use
do_sample (bool, optional): Whether or not to use sampling. Defaults to True.
max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
top_p (float, optional): If set to float < 1, only the smallest set of most probable tokens with probabilities
that add up to top_p or higher are kept for generation. Defaults to 0.92.
top_k (int, optional): The number of highest probability vocabulary tokens to keep for top-k-filtering.
Defaults to 0.
Returns:
str: the generated response
"""
print(PROMPT_FORMAT.format(instruction=instruction))
input_ids = tokenizer(PROMPT_FORMAT.format(instruction=instruction), return_tensors="pt").input_ids.to("cuda")
response_key_token_id = get_special_token_id(tokenizer, RESPONSE_KEY_NL)
end_key_token_id = get_special_token_id(tokenizer, END_KEY)
gen_tokens = model.generate(
input_ids,
pad_token_id=tokenizer.pad_token_id,
# Ensure generation stops once it generates "### End"
eos_token_id=end_key_token_id,
do_sample=do_sample,
max_new_tokens=max_new_tokens,
top_p=top_p,
no_repeat_ngram_size=5,
repetition_penalty=1.0,
num_beams=4,
top_k=top_k,
**kwargs,
)[0].cpu()
# The response will be set to this variable if we can identify it.
decoded = None
# Find where "### Response:" is first found in the generated tokens. Considering this is part of the prompt,
# we should definitely find it. We will return the tokens found after this token.
response_pos = None
response_positions = np.where(gen_tokens == response_key_token_id)[0]
if len(response_positions) == 0:
logger.warn(f"Could not find response key {response_key_token_id} in: {gen_tokens}")
else:
response_pos = response_positions[0]
if response_pos:
# Next find where "### End" is located. The model has been trained to end its responses with this sequence
# (or actually, the token ID it maps to, since it is a special token). We may not find this token, as the
# response could be truncated. If we don't find it then just return everything to the end. Note that
# even though we set eos_token_id, we still see the this token at the end.
end_pos = None
end_positions = np.where(gen_tokens == end_key_token_id)[0]
if len(end_positions) > 0:
end_pos = end_positions[0]
decoded = tokenizer.decode(gen_tokens[response_pos + 1 : end_pos]).strip()
return decoded
model ,tokenizer = load_model_tokenizer_for_generate(pretrained_model_name_or_path="Bahasalab/BahasaGPT-1")
def main():
while True:
instruction = input("Enter your instruction (type 'exit' to quit): ")
if instruction.lower() == "exit":
break
response = generate_response(model=model, tokenizer=tokenizer, instruction=instruction)
print(response)
if __name__ == "__main__":
    main()
```
|
IlyaGusev/llama_7b_ru_turbo_alpaca_lora_merged
|
IlyaGusev
| 2023-04-02T17:39:01Z | 14 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text2text-generation",
"ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-03-30T16:39:01Z |
---
datasets:
- IlyaGusev/ru_turbo_alpaca
language:
- ru
widget:
- text: |-
Задание: Сочини длинный рассказ, обязательно упоминая следующие объекты.
Вход: Таня, мяч
Выход:
example_title: Таня и мяч
- text: |-
Задание: Заполни пропуск в предложении, выведи только одно слово.
Вход: Я пытался ____ от маньяка, но он меня настиг.
Выход:
example_title: Маньяк
- text: |-
Как приготовить лазанью?
Ответ:
example_title: Лазанья
- text: |-
Вопрос: Почему трава зелёная?
Ответ:
example_title: Зелёная трава
- text: >-
Могут ли в природе встретиться в одном месте белый медведь и пингвин? Если
нет, то почему?
Выход:
example_title: Медведь и пигвин
- text: |-
Задание: Реши уравнение: 4x + 5 = 21
Выход:
example_title: Уравнение
pipeline_tag: text2text-generation
---
|
fladdict/watercolor
|
fladdict
| 2023-04-02T17:35:49Z | 0 | 18 | null |
[
"Stable-Diffusion",
"stable-diffusion-diffusers",
"lora",
"Diffusers",
"en",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-31T03:31:12Z |
---
license: creativeml-openrail-m
language:
- en
- ja
tags:
- Stable-Diffusion
- stable-diffusion-diffusers
- lora
- Diffusers
---
# 【LoRA】 fladdict-watercolor-sd
* [LoRa for SD v1.5](https://huggingface.co/fladdict/watercolor/blob/main/fladdict-watercolor-sd-1-5.safetensors)
* [LoRa for SD v2.1](https://huggingface.co/fladdict/watercolor/blob/main/fladdict-watercolor-sd-2-1.safetensors)
LoRA focused on watercolor paintings.
All training data is from public domain historical paintings.
## Instance Prompts
* watercolor painting
* impressionism, watercolor painting






## Model Description
- **Model type:** LoRA
- **Language(s) (NLP):** English
- **Model Description:** Model trained with **runwayml/stable-diffusion-v1-5** and **stability-ai/stable-diffusion-v2-1**
## Sample Prompts
* watercolor painting, masterpiece landscape of forest and river.
* impressionism, watercolor painting, beautiful flower with vase on the table.
## Recommendations
* Use a weight between 0.1 and 1.0 (depending on the subject and the touch you want).
* Try related tags like realism, impressionism, expressionism, and drawing.
* Try adding a historical painter's name (non-watercolor painters also work fine).
* Describe the background and lighting.
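A minimal loading sketch with Diffusers (assumes a diffusers version that supports `load_lora_weights`; the output filename is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the SD 1.5 LoRA weights from this repo
pipe.load_lora_weights("fladdict/watercolor", weight_name="fladdict-watercolor-sd-1-5.safetensors")

image = pipe("watercolor painting, masterpiece landscape of forest and river.").images[0]
image.save("watercolor-landscape.png")
```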
## Information
https://twitter.com/fladdict/
|
ys7yoo/sentence-roberta-large-klue-nli-sts-all
|
ys7yoo
| 2023-04-02T17:29:51Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-02T17:22:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ys7yoo/sentence-roberta-large-klue-nli-sts-all
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ys7yoo/sentence-roberta-large-klue-nli-sts-all')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ys7yoo/sentence-roberta-large-klue-nli-sts-all')
model = AutoModel.from_pretrained('ys7yoo/sentence-roberta-large-klue-nli-sts-all')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ys7yoo/sentence-roberta-large-klue-nli-sts-all)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 762 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 305,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
phqlong/q-FrozenLake-v1-4x4-noSlippery
|
phqlong
| 2023-04-02T17:26:07Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T17:26:04Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="phqlong/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Hristo/default_settings_taxi
|
Hristo
| 2023-04-02T17:25:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T13:15:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: default_settings_taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Hristo/default_settings_taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ys7yoo/sentence-bert-klue-nli-sts-all
|
ys7yoo
| 2023-04-02T17:22:03Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-04-02T17:19:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ys7yoo/sentence-bert-klue-nli-sts-all
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ys7yoo/sentence-bert-klue-nli-sts-all')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ys7yoo/sentence-bert-klue-nli-sts-all')
model = AutoModel.from_pretrained('ys7yoo/sentence-bert-klue-nli-sts-all')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ys7yoo/sentence-bert-klue-nli-sts-all)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 762 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 305,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
c0ldstudy/unit4
|
c0ldstudy
| 2023-04-02T17:16:48Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T17:16:42Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: unit4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Bandika/QTaxiV3
|
Bandika
| 2023-04-02T17:11:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T17:03:18Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: QTaxiV3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
|
Aspik101/Alpaca7b_python_assistant
|
Aspik101
| 2023-04-02T17:11:09Z | 0 | 2 | null |
[
"en",
"dataset:OllieStanley/humaneval-mbpp-codegen-qa",
"dataset:OllieStanley/humaneval-mbpp-testgen-qa",
"license:mit",
"region:us"
] | null | 2023-04-02T16:43:23Z |
---
language:
- en
license: mit
datasets:
- OllieStanley/humaneval-mbpp-codegen-qa
- OllieStanley/humaneval-mbpp-testgen-qa
---
This repo contains a low-rank adapter for LLaMA-7B fitted on data consisting of requests to write Python functions.
### How to use (8-bit)
```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=True,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "Aspik101/Alpaca7b_python_assistant")
def get_answer(question, model_version = model):
PROMPT =f'''Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{question}
### Response:
'''
inputs = tokenizer(
PROMPT,
return_tensors="pt",
)
input_ids = inputs["input_ids"].cuda()
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.95,
repetition_penalty=1.15,
)
print("Generating...")
generation_output = model_version.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=128,
)
sentences = " ".join([tokenizer.decode(s) for s in generation_output.sequences])
print(sentences.split("Response:\n")[1])
```
### Examples
```python
get_answer("Write a function that read csv by pandas")
Generating...
def read_csv(file):
df = pd.read_csv('data/test.csv')
get_answer("Write a function that check if number is even")
Generating...
def is_even(n):
return n %2 ==0
```
|
c0ldstudy/dqn-SpaceInvadersNoFrameskip-v4
|
c0ldstudy
| 2023-04-02T17:04:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T17:04:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 854.00 +/- 277.48
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c0ldstudy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga c0ldstudy -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga c0ldstudy
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
lunnan/a2c-AntBulletEnv-v0
|
lunnan
| 2023-04-02T16:55:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T16:54:30Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1952.79 +/- 100.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
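A minimal loading sketch (the checkpoint filename follows the usual `a2c-<env>.zip` naming convention of `huggingface_sb3` uploads and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="lunnan/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```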
|
dededivad/a2c-PandaReachDense-v2
|
dededivad
| 2023-04-02T16:54:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T16:52:11Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -3.65 +/- 0.85
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
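A minimal loading sketch (the checkpoint filename follows the usual `a2c-<env>.zip` naming convention of `huggingface_sb3` uploads and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="dededivad/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```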
|
manuelmaiorano/poca-SoccerTwos
|
manuelmaiorano
| 2023-04-02T16:52:44Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-04-02T16:52:37Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We also wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: manuelmaiorano/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LarryAIDraw/PrinzEugenAzurLaneLORA_v4
|
LarryAIDraw
| 2023-04-02T16:36:56Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-02T16:21:41Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/27598/prinz-eugen-azur-lane-lora
|
pomp/a2c-PandaReachDense-v2
|
pomp
| 2023-04-02T16:16:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-02T16:13:16Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.65 +/- 0.79
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
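A minimal loading sketch (the checkpoint filename follows the usual `a2c-<env>.zip` naming convention of `huggingface_sb3` uploads and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(repo_id="pomp/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```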
|
vocabtrimmer/mbart-large-cc25-trimmed-ko
|
vocabtrimmer
| 2023-04-02T16:12:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-02T16:08:37Z |
# Vocabulary Trimmed [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25): `vocabtrimmer/mbart-large-cc25-trimmed-ko`
This model is a trimmed version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | facebook/mbart-large-cc25 | vocabtrimmer/mbart-large-cc25-trimmed-ko |
|:---------------------------|:----------------------------|:-------------------------------------------|
| parameter_size_full | 610,851,840 | 402,585,600 |
| parameter_size_embedding | 512,055,296 | 95,522,816 |
| vocab_size | 250,027 | 46,642 |
| compression_rate_full | 100.0 | 65.91 |
| compression_rate_embedding | 100.0 | 18.65 |
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|:--------------------|----------------:|
| ko | vocabtrimmer/mc4_validation | text | ko | validation | | 2 |
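A minimal loading sketch for the trimmed checkpoint (the Korean example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vocabtrimmer/mbart-large-cc25-trimmed-ko")
model = AutoModelForSeq2SeqLM.from_pretrained("vocabtrimmer/mbart-large-cc25-trimmed-ko")

# The trimmed model is meant as a drop-in replacement for the original when working with Korean text
inputs = tokenizer("안녕하세요, 만나서 반갑습니다.", return_tensors="pt")
print(inputs.input_ids.shape)
```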
|
aimarsg/prueba5
|
aimarsg
| 2023-04-02T15:57:53Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-02T14:48:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: prueba5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prueba5
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2442
- Precision: 0.5258
- Recall: 0.5574
- F1: 0.5411
- Accuracy: 0.9609
## Model description
More information needed
## Intended uses & limitations
More information needed
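A minimal inference sketch (the Spanish clinical sentence is illustrative; the entity types depend on the labels used during fine-tuning):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="aimarsg/prueba5", aggregation_strategy="simple")
print(ner("Se administró paracetamol de 500 mg al paciente."))
```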
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.75e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 57 | 0.2341 | 0.0 | 0.0 | 0.0 | 0.9488 |
| No log | 2.0 | 114 | 0.2411 | 0.0 | 0.0 | 0.0 | 0.9488 |
| No log | 3.0 | 171 | 0.2150 | 0.0385 | 0.0055 | 0.0096 | 0.9410 |
| No log | 4.0 | 228 | 0.1885 | 0.25 | 0.0929 | 0.1355 | 0.9500 |
| No log | 5.0 | 285 | 0.1730 | 0.3830 | 0.1967 | 0.2599 | 0.9524 |
| No log | 6.0 | 342 | 0.1591 | 0.5098 | 0.2842 | 0.3649 | 0.9581 |
| No log | 7.0 | 399 | 0.1665 | 0.5405 | 0.3279 | 0.4082 | 0.9609 |
| No log | 8.0 | 456 | 0.1856 | 0.5294 | 0.4918 | 0.5099 | 0.9604 |
| 0.1706 | 9.0 | 513 | 0.1727 | 0.5 | 0.5191 | 0.5094 | 0.9611 |
| 0.1706 | 10.0 | 570 | 0.1717 | 0.5669 | 0.4863 | 0.5235 | 0.9639 |
| 0.1706 | 11.0 | 627 | 0.1913 | 0.5024 | 0.5628 | 0.5309 | 0.9601 |
| 0.1706 | 12.0 | 684 | 0.1793 | 0.515 | 0.5628 | 0.5379 | 0.9619 |
| 0.1706 | 13.0 | 741 | 0.2009 | 0.5679 | 0.5027 | 0.5333 | 0.9618 |
| 0.1706 | 14.0 | 798 | 0.2043 | 0.5333 | 0.5683 | 0.5503 | 0.9604 |
| 0.1706 | 15.0 | 855 | 0.2052 | 0.5486 | 0.5246 | 0.5363 | 0.9629 |
| 0.1706 | 16.0 | 912 | 0.2234 | 0.5183 | 0.5410 | 0.5294 | 0.9581 |
| 0.1706 | 17.0 | 969 | 0.2157 | 0.5424 | 0.5246 | 0.5333 | 0.9616 |
| 0.0202 | 18.0 | 1026 | 0.2207 | 0.5025 | 0.5574 | 0.5285 | 0.9596 |
| 0.0202 | 19.0 | 1083 | 0.2297 | 0.5025 | 0.5410 | 0.5211 | 0.9573 |
| 0.0202 | 20.0 | 1140 | 0.2264 | 0.5131 | 0.5355 | 0.5241 | 0.9593 |
| 0.0202 | 21.0 | 1197 | 0.2300 | 0.5181 | 0.5464 | 0.5319 | 0.9593 |
| 0.0202 | 22.0 | 1254 | 0.2348 | 0.5241 | 0.5355 | 0.5297 | 0.9604 |
| 0.0202 | 23.0 | 1311 | 0.2372 | 0.5196 | 0.5792 | 0.5478 | 0.9588 |
| 0.0202 | 24.0 | 1368 | 0.2349 | 0.5319 | 0.5464 | 0.5391 | 0.9613 |
| 0.0202 | 25.0 | 1425 | 0.2353 | 0.5312 | 0.5574 | 0.544 | 0.9619 |
| 0.0202 | 26.0 | 1482 | 0.2388 | 0.5489 | 0.5519 | 0.5504 | 0.9614 |
| 0.0044 | 27.0 | 1539 | 0.2396 | 0.5243 | 0.5301 | 0.5272 | 0.9618 |
| 0.0044 | 28.0 | 1596 | 0.2442 | 0.5152 | 0.5574 | 0.5354 | 0.9603 |
| 0.0044 | 29.0 | 1653 | 0.2444 | 0.5178 | 0.5574 | 0.5368 | 0.9604 |
| 0.0044 | 30.0 | 1710 | 0.2442 | 0.5258 | 0.5574 | 0.5411 | 0.9609 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|