modelId (string, lengths 5–139) | author (string, lengths 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-15 18:28:48) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 522 distinct values) | tags (list, lengths 1–4.05k) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-15 18:28:34) | card (string, lengths 11–1.01M) |
---|---|---|---|---|---|---|---|---|---|
XianTong/sovits4.1-genshin | XianTong | 2023-09-07T03:53:22Z | 0 | 4 | null | [
"region:us"
]
| null | 2023-08-15T09:24:03Z | sovits4.1 Genshin Impact character voice models
Author: 在下先通
# Disclaimer
Users of these models must follow the rules below:
1. The models are for personal entertainment and research use only; commercial use and any illegal use are prohibited.
2. When using them, please fill in a complete attribution sheet (借物表).
3. If improper use of a model leads to any negative consequences, the user bears full responsibility; the model author is not liable.
4. The voice copyrights belong to miHoYo and the characters' voice actors; in case of infringement, please contact the author for removal.
# Usage
Each model comes in two parts: the main SoVITS model and a diffusion model.
The diffusion model is optional, but using it gives somewhat better results.
- Main model:
The main model is the `角色名_G_xx000.pth` file (角色名 = character name). Place it, together with the optional `角色名_kmeans_xxxxx.pt` or `角色名_feature_and_index` files, in the `logs/44k/` folder;
the kmeans and feature files are the clustering model and the feature-retrieval model, respectively, and, like the diffusion model, they are not required.
`角色名.json` is the configuration file; place it in the `config` folder.
- Diffusion model:
Place it, together with its `diffusion` folder, in the `logs/44k/` folder.
`角色名_diffusion.yaml` is its configuration file; like the `.json` file above, place it in the `config` folder. |
CyberHarem/akagi_miria_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T03:51:31Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/akagi_miria_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T03:34:46Z | ---
license: mit
datasets:
- CyberHarem/akagi_miria_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of akagi_miria_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5600, you need to download `5600/akagi_miria_theidolmastercinderellagirlsu149.pt` as the embedding and `5600/akagi_miria_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5600**, with the score of 0.960. The trigger words are:
1. `akagi_miria_theidolmastercinderellagirlsu149`
2. `short_hair, black_hair, brown_eyes, brown_hair, two_side_up, smile, open_mouth, upper_body, hair_ornament`
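The card itself does not include a loading snippet. As a minimal illustration, the recommended step-5600 pair could be loaded with `diffusers` roughly as below; the local paths, the choice of `Meina/MeinaMix_V11` (the preview base model) as the pipeline base, and the sampling settings are assumptions, and many users will instead place the files in their WebUI's `embeddings` and `Lora` folders.
```python
# Hedged sketch, not part of the original card: load the step-5600 embedding (.pt)
# and LoRA (.safetensors) into a Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16  # assumed to be available in diffusers format
).to("cuda")
# The .pt file is a textual-inversion embedding tied to the first trigger word...
pipe.load_textual_inversion(
    "5600", weight_name="akagi_miria_theidolmastercinderellagirlsu149.pt",
    token="akagi_miria_theidolmastercinderellagirlsu149",
)
# ...and the .safetensors file carries the LoRA weights.
pipe.load_lora_weights("5600", weight_name="akagi_miria_theidolmastercinderellagirlsu149.safetensors")
prompt = ("akagi_miria_theidolmastercinderellagirlsu149, short_hair, black_hair, "
          "brown_eyes, two_side_up, smile, upper_body")
image = pipe(prompt, num_inference_steps=28).images[0]
image.save("akagi_miria.png")
```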
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:----------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6000 | 0.953 | [Download](6000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| **5600** | **0.960** | [**Download**](5600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 5200 | 0.947 | [Download](5200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4800 | 0.893 | [Download](4800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4400 | 0.925 | [Download](4400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 4000 | 0.927 | [Download](4000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3600 | 0.900 | [Download](3600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3600/previews/nude.png) | [<NSFW, click to see>](3600/previews/nude2.png) |  |  |
| 3200 | 0.933 | [Download](3200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2800 | 0.883 | [Download](2800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2400 | 0.946 | [Download](2400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 2000 | 0.837 | [Download](2000/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1600 | 0.827 | [Download](1600/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1600/previews/nude.png) | [<NSFW, click to see>](1600/previews/nude2.png) |  |  |
| 1200 | 0.846 | [Download](1200/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [<NSFW, click to see>](1200/previews/nude2.png) |  |  |
| 800 | 0.851 | [Download](800/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [<NSFW, click to see>](800/previews/nude2.png) |  |  |
| 400 | 0.621 | [Download](400/akagi_miria_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [<NSFW, click to see>](400/previews/nude2.png) |  |  |
|
budecosystem/Tansen | budecosystem | 2023-09-07T03:47:50Z | 0 | 6 | null | [
"license:openrail++",
"region:us"
]
| null | 2023-09-05T15:00:57Z | ---
license: openrail++
---
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/Instagram%20post%20-%204.png" alt="Tansen Logo" width="300" height="300"/>
</p>
---
<p align="center"><i>Democratizing access to LLMs, Multi-Modal Gen AI models for the open-source community.<br>Let's advance AI, together. </i></p>
---
Tansen is a text-to-speech program built with the following priorities:
1. Strong multi-voice capabilities.
2. Highly realistic prosody and intonation.
3. Speaking rate control.
<a href="https://github.com/BudEcosystem/Tansen"><img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white" /> </a>
<h2 align="left">🎧 Demos </h2>
[random_0_0.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/9a6ce191-2646-497e-bf48-003f2bf0bb8d)
[random_0_1.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/87bf5f7c-ae47-4aa4-a110-b5c9899e4446)
[random_0_2.webm](https://github.com/BudEcosystem/Tansen/assets/4546714/5549c464-c670-4e7a-987c-c5d79b32bf4b)
<h2 align="left">💻 Getting Started on GitHub </h2>
Ready to dive in? Here's how you can get started with our repo on GitHub.
<h3 align="left">1️⃣ : Clone our GitHub repository</h3>
First things first, you'll need to clone our repository. Open up your terminal, navigate to the directory where you want the repository to be cloned, and run the following command:
```bash
conda create --name Tansen python=3.9 numba inflect
conda activate Tansen
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install transformers=4.29.2
git clone https://github.com/BudEcosystem/Tansen.git
cd Tansen
```
<h3 align="left">2️⃣ : Install dependencies</h3>
```bash
python setup.py install
```
<h3 align="left">3️⃣ : Generate Audio</h3>
### do_tts.py
This script allows you to speak a single phrase with one or more voices.
```shell
python do_tts.py --text "I'm going to speak this" --voice random --preset fast
```
### read.py
This script provides tools for reading large amounts of text.
```shell
python Tansen/read.py --textfile <your text to be read> --voice random
```
This will break up the textfile into sentences, and then convert them to speech one at a time. It will output a series
of spoken clips as they are generated. Once all the clips are generated, it will combine them into a single file and
output that as well.
Sometimes Tansen screws up an output. You can re-generate any bad clips by re-running `read.py` with the --regenerate
argument.
Interested in running it as an API?
### 🐍 Usage in Python
Tansen can be used programmatically:
```python
# Note: this snippet assumes the Tansen package exposes `api` and `utils` modules
# (e.g. `from tansen import api, utils`); the exact import path is not given here.
clips_paths = ["voice/clip_0.wav", "voice/clip_1.wav"]  # hypothetical reference clips of the target voice
reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
tts = api.TextToSpeech(use_deepspeed=True, kv_cache=True, half=True)
pcm_audio = tts.tts_with_preset("your text here", voice_samples=reference_clips, preset='fast')
```
## Loss Curves
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/results/images/loss_mel_ce.png" alt="" width="500"/>
<span>loss_mel_ce</span>
</p>
<p align="center">
<img src="https://raw.githubusercontent.com/BudEcosystem/Tansen/main/results/images/loss_text_ce.png" alt="" width="500" />
<span>loss_text_ce</span>
</p>
## Training Information
Device: a single A100
Dataset: 876 hours |
shengqin/bloomz-xss-sqli-30000-1epoch | shengqin | 2023-09-07T03:44:42Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-07T03:44:41Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Akshay95/t5_recommendation_sports_equipment | Akshay95 | 2023-09-07T03:41:53Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-06T09:21:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3702
- Rouge1: 30.0
- Rouge2: 5.0
- Rougel: 30.0
- Rougelsum: 30.0
- Gen Len: 4.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
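As an illustration only (the original training script is not included in the card), these hyperparameters map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows; the output directory and the per-epoch evaluation strategy are assumptions inferred from the results table.
```python
# Hedged sketch: reconstructing the hyperparameter block above as Trainer arguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_sports_equipment",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,      # 4 x 4 = total train batch size of 16
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",        # assumed from the per-epoch validation results
    predict_with_generate=True,         # required for the ROUGE and Gen Len metrics
)
# `training_args` would then be passed to a Seq2SeqTrainer together with the tokenized
# dataset, the t5-large model, and a ROUGE-computing compute_metrics function.
```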
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 5 | 11.3134 | 5.0 | 2.5 | 5.0 | 3.3333 | 11.85 |
| No log | 2.0 | 10 | 6.7971 | 1.6667 | 0.0 | 1.6667 | 1.6667 | 19.0 |
| No log | 3.0 | 15 | 3.5555 | 0.0 | 0.0 | 0.0 | 0.0 | 19.0 |
| No log | 4.0 | 20 | 1.3544 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 5.0 | 25 | 0.6216 | 5.0 | 5.0 | 5.0 | 5.0 | 4.8 |
| No log | 6.0 | 30 | 0.4824 | 25.0 | 5.0 | 25.0 | 25.0 | 4.95 |
| No log | 7.0 | 35 | 0.4357 | 5.0 | 5.0 | 5.0 | 5.0 | 4.7 |
| No log | 8.0 | 40 | 0.4148 | 25.0 | 5.0 | 25.0 | 25.0 | 4.7 |
| No log | 9.0 | 45 | 0.3853 | 25.0 | 5.0 | 25.0 | 25.0 | 4.7 |
| No log | 10.0 | 50 | 0.3702 | 30.0 | 5.0 | 30.0 | 30.0 | 4.9 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.8.0
- Tokenizers 0.13.3
|
Onutoa/2_1e-2_10_0.5 | Onutoa | 2023-09-07T03:34:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T23:54:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 2_1e-2_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_1e-2_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9669
- Accuracy: 0.7291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7272 | 1.0 | 590 | 2.1134 | 0.4018 |
| 2.2666 | 2.0 | 1180 | 3.2261 | 0.3783 |
| 2.3033 | 3.0 | 1770 | 2.2480 | 0.3783 |
| 2.1786 | 4.0 | 2360 | 2.7497 | 0.6208 |
| 2.1516 | 5.0 | 2950 | 1.7255 | 0.6492 |
| 1.9363 | 6.0 | 3540 | 3.4672 | 0.3783 |
| 2.0556 | 7.0 | 4130 | 2.9543 | 0.4664 |
| 2.0717 | 8.0 | 4720 | 1.9668 | 0.6297 |
| 2.238 | 9.0 | 5310 | 2.0150 | 0.6376 |
| 2.0674 | 10.0 | 5900 | 1.9047 | 0.6419 |
| 1.9777 | 11.0 | 6490 | 1.8100 | 0.6104 |
| 1.8447 | 12.0 | 7080 | 1.7533 | 0.6367 |
| 1.9655 | 13.0 | 7670 | 1.5246 | 0.6612 |
| 1.7583 | 14.0 | 8260 | 1.4859 | 0.6508 |
| 1.6346 | 15.0 | 8850 | 2.1240 | 0.6869 |
| 1.6424 | 16.0 | 9440 | 1.4976 | 0.6474 |
| 1.5083 | 17.0 | 10030 | 1.2798 | 0.6939 |
| 1.6096 | 18.0 | 10620 | 1.8015 | 0.6278 |
| 1.6952 | 19.0 | 11210 | 1.6068 | 0.6774 |
| 1.6535 | 20.0 | 11800 | 1.7095 | 0.6076 |
| 1.544 | 21.0 | 12390 | 1.4624 | 0.6832 |
| 1.5493 | 22.0 | 12980 | 1.3701 | 0.7015 |
| 1.4743 | 23.0 | 13570 | 1.3619 | 0.7040 |
| 1.4021 | 24.0 | 14160 | 1.2429 | 0.6832 |
| 1.3916 | 25.0 | 14750 | 1.4104 | 0.6853 |
| 1.3976 | 26.0 | 15340 | 1.3662 | 0.6621 |
| 1.4054 | 27.0 | 15930 | 1.3757 | 0.6382 |
| 1.282 | 28.0 | 16520 | 1.3488 | 0.6639 |
| 1.2595 | 29.0 | 17110 | 1.1823 | 0.6988 |
| 1.2441 | 30.0 | 17700 | 1.3444 | 0.7180 |
| 1.1883 | 31.0 | 18290 | 1.1253 | 0.7083 |
| 1.188 | 32.0 | 18880 | 1.1578 | 0.7229 |
| 1.1719 | 33.0 | 19470 | 1.2075 | 0.6884 |
| 1.1201 | 34.0 | 20060 | 1.0837 | 0.7156 |
| 1.1222 | 35.0 | 20650 | 1.1085 | 0.7015 |
| 1.0624 | 36.0 | 21240 | 1.3319 | 0.7196 |
| 1.0747 | 37.0 | 21830 | 1.3808 | 0.6560 |
| 1.028 | 38.0 | 22420 | 1.1399 | 0.7242 |
| 1.0343 | 39.0 | 23010 | 1.0303 | 0.7101 |
| 0.9876 | 40.0 | 23600 | 1.1261 | 0.7275 |
| 0.9899 | 41.0 | 24190 | 1.4611 | 0.7235 |
| 0.9883 | 42.0 | 24780 | 1.1315 | 0.7333 |
| 0.9558 | 43.0 | 25370 | 1.0614 | 0.7040 |
| 0.9663 | 44.0 | 25960 | 1.0889 | 0.7131 |
| 0.9311 | 45.0 | 26550 | 0.9791 | 0.7235 |
| 0.9269 | 46.0 | 27140 | 0.9895 | 0.7254 |
| 0.8845 | 47.0 | 27730 | 0.9648 | 0.7336 |
| 0.9076 | 48.0 | 28320 | 0.9665 | 0.7343 |
| 0.8691 | 49.0 | 28910 | 0.9858 | 0.7339 |
| 0.8558 | 50.0 | 29500 | 0.9660 | 0.7239 |
| 0.8443 | 51.0 | 30090 | 0.9774 | 0.7294 |
| 0.8341 | 52.0 | 30680 | 1.0947 | 0.7024 |
| 0.8268 | 53.0 | 31270 | 1.0108 | 0.7315 |
| 0.8243 | 54.0 | 31860 | 0.9856 | 0.7260 |
| 0.8072 | 55.0 | 32450 | 1.0354 | 0.7199 |
| 0.807 | 56.0 | 33040 | 0.9688 | 0.7269 |
| 0.8015 | 57.0 | 33630 | 0.9622 | 0.7291 |
| 0.771 | 58.0 | 34220 | 0.9676 | 0.7269 |
| 0.7829 | 59.0 | 34810 | 0.9740 | 0.7321 |
| 0.7862 | 60.0 | 35400 | 0.9669 | 0.7291 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onutoa/2_9e-3_10_0.5 | Onutoa | 2023-09-07T03:33:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T23:53:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 2_9e-3_10_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_9e-3_10_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9490
- Accuracy: 0.7434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.009
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.4195 | 1.0 | 590 | 2.4975 | 0.3783 |
| 2.2824 | 2.0 | 1180 | 1.9145 | 0.6012 |
| 2.1458 | 3.0 | 1770 | 2.3359 | 0.6217 |
| 2.1747 | 4.0 | 2360 | 2.1157 | 0.6535 |
| 1.9504 | 5.0 | 2950 | 1.5636 | 0.6502 |
| 1.7882 | 6.0 | 3540 | 1.6203 | 0.6315 |
| 1.6871 | 7.0 | 4130 | 1.4819 | 0.6394 |
| 1.6471 | 8.0 | 4720 | 2.7794 | 0.6217 |
| 1.7323 | 9.0 | 5310 | 4.0220 | 0.6462 |
| 1.5353 | 10.0 | 5900 | 1.6458 | 0.6789 |
| 1.5678 | 11.0 | 6490 | 1.1800 | 0.7043 |
| 1.3291 | 12.0 | 7080 | 1.2374 | 0.7165 |
| 1.4272 | 13.0 | 7670 | 1.1377 | 0.7110 |
| 1.3034 | 14.0 | 8260 | 1.1466 | 0.7183 |
| 1.2451 | 15.0 | 8850 | 1.2199 | 0.7177 |
| 1.2807 | 16.0 | 9440 | 1.0946 | 0.7272 |
| 1.2129 | 17.0 | 10030 | 1.1599 | 0.7073 |
| 1.1857 | 18.0 | 10620 | 1.0682 | 0.7248 |
| 1.1625 | 19.0 | 11210 | 1.2619 | 0.7272 |
| 1.0859 | 20.0 | 11800 | 1.0746 | 0.7349 |
| 1.1021 | 21.0 | 12390 | 1.0435 | 0.7287 |
| 1.0416 | 22.0 | 12980 | 1.3806 | 0.7312 |
| 1.0426 | 23.0 | 13570 | 1.2656 | 0.7330 |
| 1.0436 | 24.0 | 14160 | 1.1256 | 0.7034 |
| 1.0052 | 25.0 | 14750 | 1.7754 | 0.7232 |
| 1.0031 | 26.0 | 15340 | 1.0313 | 0.7211 |
| 0.9812 | 27.0 | 15930 | 1.0008 | 0.7373 |
| 0.9123 | 28.0 | 16520 | 0.9610 | 0.7361 |
| 0.9127 | 29.0 | 17110 | 0.9778 | 0.7410 |
| 0.9232 | 30.0 | 17700 | 1.0516 | 0.7388 |
| 0.899 | 31.0 | 18290 | 1.0108 | 0.7183 |
| 0.8414 | 32.0 | 18880 | 1.0194 | 0.7416 |
| 0.8741 | 33.0 | 19470 | 1.1150 | 0.7135 |
| 0.8151 | 34.0 | 20060 | 1.1255 | 0.7385 |
| 0.864 | 35.0 | 20650 | 0.9919 | 0.7336 |
| 0.7863 | 36.0 | 21240 | 1.0934 | 0.7468 |
| 0.8047 | 37.0 | 21830 | 1.0928 | 0.7190 |
| 0.7751 | 38.0 | 22420 | 1.0014 | 0.7477 |
| 0.7889 | 39.0 | 23010 | 0.9600 | 0.7434 |
| 0.7376 | 40.0 | 23600 | 1.1391 | 0.7450 |
| 0.7727 | 41.0 | 24190 | 1.0360 | 0.7453 |
| 0.7564 | 42.0 | 24780 | 0.9761 | 0.7446 |
| 0.7398 | 43.0 | 25370 | 1.0142 | 0.7379 |
| 0.73 | 44.0 | 25960 | 1.0133 | 0.7407 |
| 0.7074 | 45.0 | 26550 | 0.9570 | 0.7431 |
| 0.7035 | 46.0 | 27140 | 0.9833 | 0.7474 |
| 0.6909 | 47.0 | 27730 | 1.0047 | 0.7346 |
| 0.7054 | 48.0 | 28320 | 1.0054 | 0.7440 |
| 0.6762 | 49.0 | 28910 | 0.9666 | 0.7495 |
| 0.6722 | 50.0 | 29500 | 0.9731 | 0.7404 |
| 0.6523 | 51.0 | 30090 | 0.9867 | 0.7422 |
| 0.6572 | 52.0 | 30680 | 0.9576 | 0.7468 |
| 0.6577 | 53.0 | 31270 | 0.9527 | 0.7456 |
| 0.6532 | 54.0 | 31860 | 0.9492 | 0.7453 |
| 0.6529 | 55.0 | 32450 | 0.9646 | 0.7404 |
| 0.6303 | 56.0 | 33040 | 0.9561 | 0.7434 |
| 0.6273 | 57.0 | 33630 | 0.9568 | 0.7465 |
| 0.6091 | 58.0 | 34220 | 0.9435 | 0.7483 |
| 0.6205 | 59.0 | 34810 | 0.9537 | 0.7483 |
| 0.6153 | 60.0 | 35400 | 0.9490 | 0.7434 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rizquuula/XLM-RoBERTa-IndoSQuADv2_1694025792-8-2e-06-0.01-5 | rizquuula | 2023-09-07T03:17:03Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-06T18:46:11Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: XLM-RoBERTa-IndoSQuADv2_1694025792-8-2e-06-0.01-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-IndoSQuADv2_1694025792-8-2e-06-0.01-5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8240
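Beyond the auto-generated summary, a hedged usage sketch with the `transformers` question-answering pipeline is shown below; the Indonesian question/context pair is invented purely for illustration.
```python
# Hedged usage sketch, not part of the original card.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="rizquuula/XLM-RoBERTa-IndoSQuADv2_1694025792-8-2e-06-0.01-5",
)
result = qa(
    question="Siapa yang menemukan telepon?",  # made-up example question
    context="Telepon ditemukan oleh Alexander Graham Bell pada tahun 1876.",
)
print(result["answer"], result["score"])
```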
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.369 | 1.0 | 16290 | 1.9995 |
| 1.7782 | 2.0 | 32580 | 1.8684 |
| 1.6566 | 3.0 | 48870 | 1.8263 |
| 1.5882 | 4.0 | 65160 | 1.8315 |
| 1.5493 | 5.0 | 81450 | 1.8240 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
chgenly/ppo-Huggy | chgenly | 2023-09-07T03:16:36Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-07T01:52:22Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chgenly/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Hansr/Main | Hansr | 2023-09-07T03:00:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-05T03:38:37Z | ---
license: creativeml-openrail-m
---
|
CyberHarem/sakurai_momoka_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T03:00:11Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/sakurai_momoka_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T02:41:44Z | ---
license: mit
datasets:
- CyberHarem/sakurai_momoka_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of sakurai_momoka_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5720, you need to download `5720/sakurai_momoka_theidolmastercinderellagirlsu149.pt` as the embedding and `5720/sakurai_momoka_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5720**, with the score of 0.981. The trigger words are:
1. `sakurai_momoka_theidolmastercinderellagirlsu149`
2. `blonde_hair, hairband, short_hair, green_eyes, bangs, upper_body, smile, bow, open_mouth, wavy_hair`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 6600 | 0.973 | [Download](6600/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 6160 | 0.947 | [Download](6160/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) |  |  |
| **5720** | **0.981** | [**Download**](5720/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5280 | 0.916 | [Download](5280/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4840 | 0.918 | [Download](4840/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) |  |  |
| 4400 | 0.851 | [Download](4400/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) |  |  |
| 3960 | 0.945 | [Download](3960/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3520 | 0.956 | [Download](3520/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) |  |  |
| 3080 | 0.940 | [Download](3080/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) |  |  |
| 2640 | 0.916 | [Download](2640/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 2200 | 0.935 | [Download](2200/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) |  |  |
| 1760 | 0.974 | [Download](1760/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) |  |  |
| 1320 | 0.930 | [Download](1320/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 880 | 0.946 | [Download](880/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) |  |  |
| 440 | 0.864 | [Download](440/sakurai_momoka_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) |  |  |
|
ShreyasM/ppo-Huggy4090 | ShreyasM | 2023-09-07T02:23:36Z | 32 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-07T02:21:29Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ShreyasM/ppo-Huggy4090
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
YassineBenlaria/m2m100_418M_tq_fr_new_data | YassineBenlaria | 2023-09-07T02:15:46Z | 20 | 0 | transformers | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:YassineBenlaria/m2m100_418M_tq_fr_1",
"base_model:finetune:YassineBenlaria/m2m100_418M_tq_fr_1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-24T23:26:28Z | ---
base_model: heisenberg1337/m2m100_418M_tq_fr_1
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M_tq_fr_new_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M_tq_fr_new_data
This model is a fine-tuned version of [heisenberg1337/m2m100_418M_tq_fr_1](https://huggingface.co/heisenberg1337/m2m100_418M_tq_fr_1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7732
- Bleu: 5.0521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7993 | 1.69 | 100 | 0.7767 | 4.9208 |
| 0.7761 | 3.38 | 200 | 0.7743 | 4.7089 |
| 0.7723 | 5.06 | 300 | 0.7726 | 4.8445 |
| 0.7585 | 6.75 | 400 | 0.7720 | 4.8352 |
| 0.7468 | 8.44 | 500 | 0.7732 | 5.1454 |
| 0.7331 | 10.13 | 600 | 0.7744 | 4.8311 |
| 0.7301 | 11.81 | 700 | 0.7732 | 5.0521 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rizquuula/XLM-RoBERTa-IndoSQuADv2_1694025616-16-2e-06-0.01-5 | rizquuula | 2023-09-07T02:13:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-06T18:43:15Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: XLM-RoBERTa-IndoSQuADv2_1694025616-16-2e-06-0.01-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-IndoSQuADv2_1694025616-16-2e-06-0.01-5
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8503
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.475 | 1.0 | 8145 | 2.0453 |
| 1.8481 | 2.0 | 16290 | 1.9140 |
| 1.7296 | 3.0 | 24435 | 1.8664 |
| 1.6676 | 4.0 | 32580 | 1.8543 |
| 1.6342 | 5.0 | 40725 | 1.8503 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
NEO946B/a2c-PandaPickAndPlace-v3 | NEO946B | 2023-09-07T02:13:11Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-09T11:09:58Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
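Until the TODO above is filled in, a hedged sketch of loading this checkpoint from the Hub and rolling out one episode might look like the following; the zip filename inside the repo is an assumption.
```python
# Hedged sketch, not the card author's code.
import gymnasium as gym
import panda_gym  # noqa: F401 - registers PandaPickAndPlace-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="NEO946B/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
env = gym.make("PandaPickAndPlace-v3")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```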
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
folflo/mt5-small-finetuned-HunSum-1_v0905 | folflo | 2023-09-07T02:08:09Z | 59 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-06T10:20:15Z | ---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: folflo/mt5-small-finetuned-HunSum-1_v0905
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# folflo/mt5-small-finetuned-HunSum-1_v0905
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.0197
- Validation Loss: 2.6041
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 103120, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
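The flattened optimizer dictionary above corresponds to the standard `transformers.create_optimizer` setup for TensorFlow (AdamWeightDecay with a linear, power-1 polynomial decay); a hedged reconstruction, not taken from the original notebook, is:
```python
# Hedged sketch of the optimizer configuration listed above.
import tensorflow as tf
from transformers import create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above
optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,           # initial_learning_rate
    num_train_steps=103_120,  # decay_steps
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
# The optimizer would then be passed to model.compile() on the TF mT5 model before model.fit().
```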
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2082 | 2.8691 | 0 |
| 3.2184 | 2.6935 | 1 |
| 3.0197 | 2.6041 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.12.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
CzarnyRycerz/pyramids-model-1 | CzarnyRycerz | 2023-09-07T02:04:31Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-09-07T01:40:08Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: CzarnyRycerz/pyramids-model-1
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
CyberHarem/tachibana_arisu_theidolmastercinderellagirlsu149 | CyberHarem | 2023-09-07T02:03:46Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/tachibana_arisu_theidolmastercinderellagirlsu149",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-07T01:45:14Z | ---
license: mit
datasets:
- CyberHarem/tachibana_arisu_theidolmastercinderellagirlsu149
pipeline_tag: text-to-image
tags:
- art
---
# Lora of tachibana_arisu_theidolmastercinderellagirlsu149
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5000, you need to download `5000/tachibana_arisu_theidolmastercinderellagirlsu149.pt` as the embedding and `5000/tachibana_arisu_theidolmastercinderellagirlsu149.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5000**, with the score of 0.978. The trigger words are:
1. `tachibana_arisu_theidolmastercinderellagirlsu149`
2. `brown_hair, long_hair, brown_eyes, bow, hair_bow, upper_body, blue_bow, closed_mouth, bangs`
We do not recommend this model for the following groups, and we apologize to them:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the inherent randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models with LoRA, or who believe that character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:--------------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.858 | [Download](7500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.863 | [Download](7000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.900 | [Download](6500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.896 | [Download](6000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.858 | [Download](5500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| **5000** | **0.978** | [**Download**](5000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.817 | [Download](4500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.850 | [Download](4000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| 3500 | 0.854 | [Download](3500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.890 | [Download](3000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.845 | [Download](2500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.832 | [Download](2000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.896 | [Download](1500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.816 | [Download](1000/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.516 | [Download](500/tachibana_arisu_theidolmastercinderellagirlsu149.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
mychen76/donut-base-sroie | mychen76 | 2023-09-07T02:00:20Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2023-09-07T00:08:55Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
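The card does not include inference code. A hedged sketch of running this checkpoint with `transformers` is shown below; the task prompt token is an assumption (Donut fine-tunes typically use a dedicated start token) and the input image path is a placeholder.
```python
# Hedged inference sketch, not part of the original card.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("mychen76/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("mychen76/donut-base-sroie")
image = Image.open("receipt.jpg").convert("RGB")   # placeholder receipt image
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"                                # assumed decoder start prompt
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```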
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gmongaras/wizardLM-7B-HF-8bit | gmongaras | 2023-09-07T01:47:58Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
]
| text-generation | 2023-09-07T00:12:30Z | # Original Repo:
https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main
I just took it from there and made it 8-bit.
# WizardLM: An Instruction-following LLM Using Evol-Instruct
These files are the result of merging the [delta weights](https://huggingface.co/victor123/WizardLM) with the original Llama7B model.
The code for merging is provided in the [WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).
The original WizardLM deltas are in float32, which results in a merged HF repo that is also float32 and much larger than a normal 7B Llama model.
Therefore for this repo I converted the merged model to float16, to produce a standard size 7B model.
This was achieved by running **`model = model.half()`** prior to saving.
## WizardLM-7B HF
This repo contains the full unquantised model files in HF format for GPU inference and as a base for quantisation/conversion.
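A minimal loading sketch for 8-bit GPU inference, assuming `bitsandbytes` and `accelerate` are installed (the prompt below is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gmongaras/wizardLM-7B-HF-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo name suggests the weights are already stored in 8-bit; if they are not,
# add load_in_8bit=True to quantize on load via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short poem about the sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```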
## Other repositories available
* [4bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GGML)
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model info
## Full details in the model's Github page
[WizardLM official Github repo](https://github.com/nlpxucan/WizardLM).
## Overview of Evol-Instruct
Evol-Instruct is a novel method using LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skills range, to improve the performance of LLMs.
Although WizardLM-7B outperforms ChatGPT on the high-complexity instructions in our complexity-balanced test set, it still lags behind ChatGPT on the entire test set, and we consider WizardLM to still be in an early stage. This repository will continue to improve WizardLM, train on larger scales, add more training data, and innovate more advanced large-model training methods.


|
yesj1234/mbart_cycle1_ko-zh | yesj1234 | 2023-09-07T01:20:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"ko",
"zh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-04T22:47:58Z | ---
language:
- ko
- zh
base_model: ./reduced_model_zh
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: tst-translation-output2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-translation-output2
This model is a fine-tuned version of [mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on a custom dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4005
- Bleu: 26.0229
- Gen Len: 15.1659
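A rough inference sketch with the translation pipeline; the mBART language codes below (`ko_KR`, `zh_CN`) are assumptions, since the card does not document how the reduced tokenizer was prepared:
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="yesj1234/mbart_cycle1_ko-zh",
    src_lang="ko_KR",   # assumed mBART language codes; verify against the tokenizer
    tgt_lang="zh_CN",
)
print(translator("안녕하세요, 만나서 반갑습니다."))
```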
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 4.5769 | 1.15 | 1000 | 4.0805 | 14.9483 | 30.0618 |
| 2.8098 | 2.31 | 2000 | 3.0612 | 19.7963 | 16.7121 |
| 1.7974 | 3.46 | 3000 | 2.8258 | 21.7059 | 15.5179 |
| 1.1474 | 4.62 | 4000 | 2.6951 | 22.4801 | 16.6382 |
| 0.8042 | 5.77 | 5000 | 2.7272 | 22.4419 | 15.1393 |
| 0.5605 | 6.93 | 6000 | 2.8239 | 23.1096 | 15.6457 |
| 0.3857 | 8.08 | 7000 | 2.9448 | 24.2536 | 15.1538 |
...
| 0.0042 | 40.42 | 35000 | 3.3485 | 25.2464 | 15.2387 |
| 0.0029 | 41.57 | 36000 | 3.3744 | 25.2885 | 15.1306 |
| 0.0026 | 42.73 | 37000 | 3.3947 | 25.9359 | 15.1896 |
| 0.0024 | 43.88 | 38000 | 3.3699 | 25.5309 | 15.2671 |
| 0.0022 | 45.03 | 39000 | 3.3947 | 25.2932 | 15.1387 |
| 0.0011 | 46.19 | 40000 | 3.4075 | 25.7551 | 15.1231 |
| 0.001 | 47.34 | 41000 | 3.3918 | 25.6345 | 15.1243 |
| 0.0007 | 48.5 | 42000 | 3.4063 | 25.7209 | 15.111 |
| 0.0006 | 49.65 | 43000 | 3.4003 | 25.9227 | 15.1873 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Futatabi6/F6_RVC | Futatabi6 | 2023-09-07T01:10:39Z | 0 | 0 | null | [
"license:openrail",
"region:us"
]
| null | 2023-09-06T17:43:52Z | ---
license: openrail
---
Please credit Futatabi6 for the voice models!
They were made in Applio. |
SaiedAlshahrani/bloom_360M_4bit_qlora_arc | SaiedAlshahrani | 2023-09-07T00:52:12Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_560M_8bit",
"base_model:finetune:asas-ai/bloom_560M_8bit",
"region:us"
]
| null | 2023-09-07T00:06:20Z | ---
base_model: asas-ai/bloom_360M_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_360M_4bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_360M_4bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_360M_8bit](https://huggingface.co/asas-ai/bloom_360M_8bit) on an unknown dataset.
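The repository name suggests these are QLoRA adapter weights trained with PEFT on top of the base model named above; if so, a rough loading sketch (the adapter/base pairing is an assumption) could look like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "asas-ai/bloom_360M_8bit"                       # base model from the card metadata
adapter_id = "SaiedAlshahrani/bloom_360M_4bit_qlora_arc"  # assumed to hold a LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the (assumed) adapter

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```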
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
awelivita/hugging_face_model | awelivita | 2023-09-07T00:47:37Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-07T00:46:32Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hugging_face_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hugging_face_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0084
- Accuracy: 0.5333
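A minimal inference sketch; since the label set of the training data is not documented, predictions come back with generic `LABEL_*` names:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="awelivita/hugging_face_model")

# Without a documented label mapping, outputs use generic names such as LABEL_0;
# check model.config.id2label for the stored mapping.
print(classifier("I really enjoyed this, it exceeded my expectations."))
```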
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 8 | 1.0378 | 0.5333 |
| No log | 2.0 | 16 | 1.0084 | 0.5333 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
MoeenTB/q-Taxi-v3 | MoeenTB | 2023-09-07T00:38:53Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-07T00:38:51Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles a dictionary holding the Q-table and environment metadata.
model = load_from_hub(repo_id="MoeenTB/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RitaQi/roberta-test2 | RitaQi | 2023-09-07T00:28:54Z | 180 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T21:37:37Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name: roberta-test2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-test2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the ag_news dataset.
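A minimal inference sketch (the ag_news class names are assumptions about the stored label mapping; check `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "RitaQi/roberta-test2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Stocks rallied after the central bank left interest rates unchanged."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
# ag_news uses four classes (World / Sports / Business / Sci-Tech); the checkpoint may
# store these names or only generic LABEL_* ids.
print(pred, model.config.id2label[pred])
```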
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8000
- eval_batch_size: 8000
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cpu
- Datasets 2.14.4
- Tokenizers 0.13.3
|
RaymundoSGlz/distilroberta-base-mrpc-glue | RaymundoSGlz | 2023-09-07T00:09:31Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"glue",
"mrpc",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T21:57:24Z | ---
license: apache-2.0
tags:
- text-classification
- glue
- mrpc
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text:
- >-
Yucaipa owned Dominick 's before selling the chain to Safeway in 1998
for $ 2.5 billion.
- >-
Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to
Safeway for $ 1.8 billion in 1998.
example_title: Not Equivalent
- text:
- >-
Revenue in the first quarter of the year dropped 15 percent from the
same period a year earlier.
- >-
With the scandal hanging over Stewart's company revenue the first
quarter of the year dropped 15 percent from the same period a year
earlier.
example_title: Equivalent
model-index:
- name: distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8602941176470589
- name: F1
type: f1
value: 0.8994708994708994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5448
- Accuracy: 0.8603
- F1: 0.8995
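A small sketch that scores the second widget pair directly with the model (label index meanings should be confirmed against `model.config.id2label`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "RaymundoSGlz/distilroberta-base-mrpc-glue"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

s1 = "Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier."
s2 = "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."
inputs = tokenizer(s1, s2, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# For MRPC heads, index 1 usually means "equivalent"; verify via model.config.id2label.
print(probs)
```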
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4909 | 1.09 | 500 | 0.5448 | 0.8603 | 0.8995 |
| 0.3148 | 2.18 | 1000 | 0.6753 | 0.8431 | 0.8873 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rizquuula/XLM-RoBERTa-IndoSQuADv2_1694026058-8-2e-05-0.01-3 | rizquuula | 2023-09-06T23:59:40Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-06T18:50:38Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: XLM-RoBERTa-IndoSQuADv2_1694026058-8-2e-05-0.01-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-IndoSQuADv2_1694026058-8-2e-05-0.01-3
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8153
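A minimal question-answering sketch; the Indonesian example below is an assumption, since the card provides no samples:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="rizquuula/XLM-RoBERTa-IndoSQuADv2_1694026058-8-2e-05-0.01-3",
)

# SQuAD v2-style training means the model may also predict "no answer"
# for unanswerable questions.
result = qa(
    question="Di mana ibu kota Indonesia?",
    context="Jakarta adalah ibu kota Indonesia.",
)
print(result)
```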
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8183 | 1.0 | 16290 | 1.7666 |
| 1.3623 | 2.0 | 32580 | 1.7385 |
| 1.1063 | 3.0 | 48870 | 1.8153 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
guydebruyn/ppo-LunarLander-v2 | guydebruyn | 2023-09-06T23:49:40Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T23:49:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.55 +/- 19.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it back into a PPO model.
checkpoint = load_from_hub(repo_id="guydebruyn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rigon-tk/ppo-LunarLander-v2 | rigon-tk | 2023-09-06T23:39:28Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T23:39:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.75 +/- 21.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it back into a PPO model.
checkpoint = load_from_hub(repo_id="rigon-tk/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Infernaught/test_ap_weights | Infernaught | 2023-09-06T23:30:16Z | 5 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-06T23:29:54Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
- PEFT 0.5.0
|
amanastel/astel | amanastel | 2023-09-06T23:27:40Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-08-29T21:41:22Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of VM
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
dbmdz/bert-small-historic-multilingual-cased | dbmdz | 2023-09-06T22:19:54Z | 160 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
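All of these checkpoints load with the standard Transformers API; for instance, the small multilingual model can be used for masked-token prediction on the widget example (a minimal sketch):
```python
from transformers import pipeline

# Fill-mask pipeline using the small historic multilingual checkpoint.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-small-historic-multilingual-cased")
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```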
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different thresholds of OCR confidence, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different ocr confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/electra-base-turkish-mc4-cased-generator | dbmdz | 2023-09-06T22:19:47Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: tr
license: mit
datasets:
- allenai/c4
widget:
- text: "[MASK] sözcüğü Türkçe kökenlidir"
---
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation were contributed by the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
dbmdz/bert-base-german-cased | dbmdz | 2023-09-06T22:19:38Z | 45,677 | 19 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"doi:10.57967/hf/4377",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: de
license: mit
---
# 🤗 + 📚 dbmdz German BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources another German BERT models 🎉
# German BERT
## Stats
In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/) we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model was trained with an initial
sequence length of 512 subwords, and training was performed for 1.5M steps.
This release includes both cased and uncased models.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)
## Usage
With Transformers >= 2.3 our German BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/t5-base-conll03-english | dbmdz | 2023-09-06T22:19:24Z | 309 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
language: en
license: mit
datasets:
- conll2003
widget:
- text: My name is Clara Clever and I live in Berkeley , California .
---
# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)
In this repository, we open source a T5 Base model, that was fine-tuned on the official CoNLL-2003 NER dataset.
We use the great [TANL library](https://github.com/amazon-research/tanl) from Amazon for fine-tuning the model.
The exact approach of fine-tuning is presented in the "TANL: Structured Prediction as Translation between Augmented Natural Languages"
paper from Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.
# Fine-Tuning
We use the same hyper-parameter settings as used in the official implementation, with one minor change. Instead of using 8 V100 GPUs, we trained the model
on one V100 GPU and used gradient accumulation. The slightly modified configuration file (`config.ini`) then looks like:
```ini
[conll03]
datasets = conll03
model_name_or_path = t5-base
num_train_epochs = 10
max_seq_length = 256
max_seq_length_eval = 512
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
do_train = True
do_eval = True
do_predict = True
gradient_accumulation_steps = 8
```
It took around 2 hours to fine-tune that model on the 14,041 training sentences of the CoNLL-2003 dataset.
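For inference, the checkpoint behaves like any other T5 model; a small sketch using the widget sentence (the exact TANL markup of the generated output should be checked against the TANL repository):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "dbmdz/t5-base-conll03-english"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

sentence = "My name is Clara Clever and I live in Berkeley , California ."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
# TANL generates an augmented copy of the input in which entity spans are marked.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```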
# Evaluation
On the development set, the following evaluation results could be achieved:
```json
{
"entity_precision": 0.9536446086664427,
"entity_recall": 0.9555705149781218,
"entity_f1": 0.9546065904505716,
"entity_precision_no_type": 0.9773261672824992,
"entity_recall_no_type": 0.9792998990238977,
"entity_f1_no_type": 0.9783120376597176
}
```
The evaluation results on the test set looks like:
```json
{
"entity_precision": 0.912182296231376,
"entity_recall": 0.9213881019830028,
"entity_f1": 0.9167620893155995,
"entity_precision_no_type": 0.953900087642419,
"entity_recall_no_type": 0.9635269121813032,
"entity_f1_no_type": 0.9586893332158901
}
```
To summarize: this model achieves an F1-score of 95.46% on the development set and 91.68% on the test set. The paper reported an F1-score of 91.7%.
# License
The model is licensed under [MIT](https://choosealicense.com/licenses/mit/).
# Acknowledgments
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/convbert-base-turkish-cased | dbmdz | 2023-09-06T22:19:16Z | 271 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"feature-extraction",
"tr",
"arxiv:2008.02496",
"license:mit",
"endpoints_compatible",
"region:us"
]
| feature-extraction | 2022-03-02T23:29:05Z | ---
language: tr
license: mit
---
# 🤗 + 📚 dbmdz Turkish ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ConvBERT model for Turkish 🎉
# 🇹🇷 ConvBERTurk
ConvBERTurk is a community-driven cased ConvBERT model for Turkish.
In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented
in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the steps with a sequence length of 128
and for 10% with a sequence length of 512, we pre-train the model with a sequence length of 512 for 1M steps on a v3-32 TPU.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-32!
## Usage
With Transformers >= 4.3 our cased ConvBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/convbert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
## Results
For results on PoS tagging, NER and Question Answering downstream tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our DBMDZ BERT models in general, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-mini-historic-multilingual-cased | dbmdz | 2023-09-06T22:19:11Z | 865 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different thresholds of OCR confidence, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different ocr confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/german-gpt2-faust | dbmdz | 2023-09-06T22:18:42Z | 144 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:05Z | ---
language: de
widget:
- text: "Schon um die Liebe"
license: mit
---
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various texts for German.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.
More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
## German GPT-2 fine-tuned on Faust I and II
We fine-tuned our German GPT-2 model on "Faust I and II" from Johann Wolfgang Goethe. These texts can be obtained from [Deutsches Textarchiv (DTA)](http://www.deutschestextarchiv.de/book/show/goethe_faust01_1808). We use the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ")
Fine-tuning was done for 100 epochs, using a batch size of 4 with half precision on an RTX 3090. Total time was around 12 minutes (it is really fast!).
We also open source this fine-tuned model. Text can be generated with:
```python
from transformers import pipeline
pipe = pipeline('text-generation', model="dbmdz/german-gpt2-faust",
tokenizer="dbmdz/german-gpt2-faust")
text = pipe("Schon um die Liebe", max_length=100)[0]["generated_text"]
print(text)
```
and could output:
```
Schon um die Liebe bitte ich, Herr! Wer mag sich die dreifach Ermächtigen?
Sei mir ein Held!
Und daß die Stunde kommt spreche ich nicht aus.
Faust (schaudernd).
Den schönen Boten finde' ich verwirrend;
```
# License
All models are licensed under [MIT](LICENSE).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/stefan-it/german-gpt/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-base-italian-xxl-uncased | dbmdz | 2023-09-06T22:18:38Z | 80,724 | 10 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: it
license: mit
datasets:
- wikipedia
---
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch between the "real" vocab size of 31102 and the
vocab size specified in `config.json`. However, the model works as expected, and all
evaluations were done under these circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
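If you want to check the mismatch yourself, a small sketch (using this card's XXL uncased checkpoint as an example):

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-uncased"
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

print("vocab size in config.json:", config.vocab_size)
print("actual tokenizer vocab size:", len(tokenizer))
```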
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-base-historic-multilingual-64k-td-cased | dbmdz | 2023-09-06T22:16:56Z | 283 | 1 | transformers | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:2205.15575",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-26T11:52:36Z | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
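For reference, the subword fertility rate above is the average number of subword pieces the tokenizer produces per whitespace-separated token. A minimal sketch of this calculation with the released tokenizer (the example sentence is a placeholder, not the actual NER corpora):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-64k-td-cased")

# Placeholder text; the numbers in the tables were computed on CLEF-HIPE/NewsEye data.
sentences = ["Täkäläinen sanomalehdistö on erittäin vilkas."]

n_words = n_subwords = n_unk = 0
for sentence in sentences:
    for word in sentence.split():
        pieces = tokenizer.tokenize(word)
        n_words += 1
        n_subwords += len(pieces)
        n_unk += pieces.count(tokenizer.unk_token)

print("subword fertility:", n_subwords / n_words)
print("unknown portion:", n_unk / n_subwords)
```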
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
Details about the pretraining are coming soon.
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗 |
dbmdz/electra-base-italian-xxl-cased-generator | dbmdz | 2023-09-06T22:16:01Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"electra",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: it
license: mit
datasets:
- wikipedia
---
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch between the "real" vocab size of 31102 and the
vocab size specified in `config.json`. However, the model works as expected, and all
evaluations were done under these circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
dbmdz/bert-base-historic-multilingual-cased | dbmdz | 2023-09-06T22:15:33Z | 181 | 7 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:2205.15575",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Smaller Models
We have also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
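All of these checkpoints are standard BERT masked language models, so they can be used with the regular fill-mask pipeline; a minimal sketch (the example sentence is one of the widget examples above):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-multilingual-cased")
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```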
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗 |
dbmdz/bert-tiny-historic-multilingual-cased | dbmdz | 2023-09-06T22:11:18Z | 219 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: multilingual
license: mit
widget:
- text: "and I cannot conceive the reafon why [MASK] hath"
- text: "Täkäläinen sanomalehdistö [MASK] erit - täin"
- text: "Det vore [MASK] häller nödvändigt att be"
- text: "Comme, à cette époque [MASK] était celle de la"
- text: "In [MASK] an atmosphärischen Nahrungsmitteln"
---
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes) and report the number of parameters and pre-training cost:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
luffycodes/noether-vicuna-13b | luffycodes | 2023-09-06T22:10:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.12244",
"arxiv:2306.08568",
"arxiv:2308.09583",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-06T22:09:38Z | ---
license: llama2
duplicated_from: WizardLM/WizardMath-13B-V1.0
---
## duplicated_from: WizardLM/WizardMath-13B-V1.0
## WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (RLEIF)
<p align="center">
🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
</p>
<p align="center">
👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
</p>
| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 |50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | 51.6 | [Demo](http://47.103.63.15:50088/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 |37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 |28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
</font>
**Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath
**Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128
**Discord**: https://discord.gg/VZjjHtWrKs
## Comparing WizardMath-V1.0 with Other LLMs.
🔥 The following figure shows that our **WizardMath-70B-V1.0 attains the fifth position in this benchmark**, surpassing ChatGPT (81.6 vs. 80.8), Claude Instant (81.6 vs. 80.9), and PaLM 2 540B (81.6 vs. 80.7).
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/nlpxucan/WizardLM/main/WizardMath/images/wizardmath_gsm8k.png" alt="WizardMath" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
</p>
❗<b>Note on system prompt usage:</b>
Please use **the same system prompts strictly** as we do; we do not guarantee the accuracy of the **quantized versions**.
**Default version:**
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```
**CoT Version:** (❗For **simple** math questions, we do NOT recommend using the CoT prompt.)
```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
## Inference WizardMath Demo Script
We provide the WizardMath inference demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
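If you only need a quick local test without the demo repository, a minimal sketch along the following lines should work. This is not the official demo code; it simply applies the default prompt format from above and assumes enough GPU memory plus `accelerate` for `device_map="auto"`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "WizardLM/WizardMath-13B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

instruction = "James buys 5 packs of beef that are 4 pounds each. The price of beef is $5.50 per pound. How much did he pay?"
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```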
❗<b>To address common concerns about the dataset:</b>
Recently, there have been clear changes in the open-source policy and regulations governing our organization's code, data, and models.
Despite this, we have still worked hard to open the model weights first, but the data requires stricter auditing and is under review by our legal team.
Our researchers have no authority to release it publicly without authorization.
Thank you for your understanding.
## Citation
Please cite the repo if you use the data, method or code in this repo.
```
@article{luo2023wizardmath,
title={WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct},
author={Luo, Haipeng and Sun, Qingfeng and Xu, Can and Zhao, Pu and Lou, Jianguang and Tao, Chongyang and Geng, Xiubo and Lin, Qingwei and Chen, Shifeng and Zhang, Dongmei},
journal={arXiv preprint arXiv:2308.09583},
year={2023}
}
```
|
mcQuantum/swin-tiny-patch4-window7-224-finetuned-eurosat | mcQuantum | 2023-09-06T22:05:49Z | 212 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-06T21:45:42Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9725925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0792
- Accuracy: 0.9726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.226 | 1.0 | 190 | 0.1455 | 0.9519 |
| 0.199 | 2.0 | 380 | 0.0931 | 0.9681 |
| 0.1476 | 3.0 | 570 | 0.0792 | 0.9726 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
asas-ai/bloom_1B_4bit_qlora_arc | asas-ai | 2023-09-06T22:02:59Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:asas-ai/bloom_1B_8bit",
"base_model:finetune:asas-ai/bloom_1B_8bit",
"region:us"
]
| null | 2023-09-06T22:02:34Z | ---
base_model: asas-ai/bloom_1B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_1B_4bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_1B_4bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_1B_8bit](https://huggingface.co/asas-ai/bloom_1B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SaiedAlshahrani/bloom_1B_4bit_qlora_arc | SaiedAlshahrani | 2023-09-06T22:02:37Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_1B_8bit",
"base_model:finetune:asas-ai/bloom_1B_8bit",
"region:us"
]
| null | 2023-09-06T21:10:04Z | ---
base_model: asas-ai/bloom_1B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_1B_4bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_1B_4bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_1B_8bit](https://huggingface.co/asas-ai/bloom_1B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Rewire/XTC | Rewire | 2023-09-06T22:00:42Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-24T10:33:17Z | (COMING SOON!)
MULTILINGUAL HATECHECK: Functional Tests for Multilingual Hate Speech Detection Models |
PHL99/q-Taxi-v3-agent2 | PHL99 | 2023-09-06T21:32:41Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T21:21:02Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-agent2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="PHL99/q-Taxi-v3-agent2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
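Once the environment is created, a rough sketch of rolling out the greedy policy looks like this. It continues the snippet above and assumes the pickled dict exposes a `"qtable"` key, as in the course notebooks, and a gymnasium-style `reset`/`step` API; adjust if your versions differ.

```python
import numpy as np

qtable = model["qtable"]  # assumption: key name used by the course notebooks
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```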
|
abdel1311/ppo-LunarLander-v2 | abdel1311 | 2023-09-06T21:28:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T21:28:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.80 +/- 24.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
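Until the section above is filled in, a minimal sketch of loading and evaluating the checkpoint could look like this. The `filename` is an assumption (check the repository's file list), and a recent `stable-baselines3`/`gymnasium` setup with Box2D installed is assumed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Filename is an assumption; adjust it to the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="abdel1311/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```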
|
isabellazhou/distilbert-base-uncased-finetuned-mrpc | isabellazhou | 2023-09-06T21:26:51Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-05T22:17:56Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.7034313725490197
- name: F1
type: f1
value: 0.819672131147541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5325
- Accuracy: 0.7034
- F1: 0.8197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 115 | 0.5325 | 0.7034 | 0.8197 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
k3smith/my_awesome_model | k3smith | 2023-09-06T21:12:09Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-06T20:54:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2988
- Accuracy: 0.9146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3705 | 1.0 | 1069 | 0.3322 | 0.8983 |
| 0.2463 | 2.0 | 2138 | 0.2988 | 0.9146 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PHL99/q-Taxi-v3 | PHL99 | 2023-09-06T21:09:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T21:08:59Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="PHL99/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
PHL99/q-FrozenLake-v1-4x4-noSlippery | PHL99 | 2023-09-06T21:06:54Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T21:06:52Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="PHL99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shawhin/distilbert-base-uncased-lora-text-classification | shawhin | 2023-09-06T21:06:31Z | 0 | 2 | null | [
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
]
| null | 2023-09-06T15:42:36Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0684
- Accuracy: 0.879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.4266 | {'accuracy': 0.87} |
| 0.4232 | 2.0 | 500 | 0.4260 | {'accuracy': 0.88} |
| 0.4232 | 3.0 | 750 | 0.5071 | {'accuracy': 0.885} |
| 0.2213 | 4.0 | 1000 | 0.7424 | {'accuracy': 0.875} |
| 0.2213 | 5.0 | 1250 | 0.7885 | {'accuracy': 0.881} |
| 0.067 | 6.0 | 1500 | 0.9312 | {'accuracy': 0.872} |
| 0.067 | 7.0 | 1750 | 0.9669 | {'accuracy': 0.874} |
| 0.0238 | 8.0 | 2000 | 1.0856 | {'accuracy': 0.874} |
| 0.0238 | 9.0 | 2250 | 1.0637 | {'accuracy': 0.88} |
| 0.0066 | 10.0 | 2500 | 1.0684 | {'accuracy': 0.879} |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.2
|
Outrageous-add/v8_ll2 | Outrageous-add | 2023-09-06T21:06:06Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-06T21:05:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
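For reference, the configuration listed above corresponds roughly to the following `BitsAndBytesConfig` when loading a base model in 8-bit. This is an illustrative sketch only; the card does not state which base model the adapter was trained on, so the model id below is a placeholder:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# "base-model-id" is a placeholder; substitute the base model this adapter was trained from
# model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config, device_map="auto")
```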
### Framework versions
- PEFT 0.5.0
|
ShreyasM/ppo-Huggy2 | ShreyasM | 2023-09-06T21:00:11Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-06T20:59:58Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ShreyasM/ppo-Huggy2
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Benjaminabruzzo/Taxi-v3 | Benjaminabruzzo | 2023-09-06T20:59:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T20:59:08Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Benjaminabruzzo/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rohitdavas/ppo-Huggy | rohitdavas | 2023-09-06T20:55:59Z | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-09-06T20:55:54Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rohitdavas/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
asas-ai/bloom_1B_8bit_qlora_arc | asas-ai | 2023-09-06T20:52:30Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:asas-ai/bloom_1B_8bit",
"base_model:finetune:asas-ai/bloom_1B_8bit",
"region:us"
]
| null | 2023-09-06T20:52:07Z | ---
base_model: asas-ai/bloom_1B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_1B_8bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_1B_8bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_1B_8bit](https://huggingface.co/asas-ai/bloom_1B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
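The repository name suggests these are QLoRA adapter weights trained on top of the quantized BLOOM base model linked above. A minimal generation sketch under that assumption (it also assumes the base repository ships a tokenizer; the prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("asas-ai/bloom_1B_8bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("asas-ai/bloom_1B_8bit")

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base, "asas-ai/bloom_1B_8bit_qlora_arc")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```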
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SaiedAlshahrani/bloom_1B_8bit_qlora_arc | SaiedAlshahrani | 2023-09-06T20:52:09Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_1B_8bit",
"base_model:finetune:asas-ai/bloom_1B_8bit",
"region:us"
]
| null | 2023-09-06T19:59:24Z | ---
base_model: asas-ai/bloom_1B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_1B_8bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_1B_8bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_1B_8bit](https://huggingface.co/asas-ai/bloom_1B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
YiYiXu/kandinsky_prior_pokemon | YiYiXu | 2023-09-06T20:49:14Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"kandinsky",
"text-to-image",
"dataset:lambdalabs/pokemon-blip-captions",
"base_model:kandinsky-community/kandinsky-2-2-prior",
"base_model:finetune:kandinsky-community/kandinsky-2-2-prior",
"license:creativeml-openrail-m",
"diffusers:KandinskyV22PriorPipeline",
"region:us"
]
| text-to-image | 2023-09-06T07:05:43Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-prior
datasets:
- lambdalabs/pokemon-blip-captions
tags:
- kandinsky
- text-to-image
- diffusers
inference: true
---
# Finetuning - YiYiXu/kandinsky_prior_pokemon
This pipeline was finetuned from **kandinsky-community/kandinsky-2-2-prior** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A robot pokemon, 4k photo']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("YiYiXu/kandinsky_prior_pokemon", torch_dtype=torch.float16)
pipe_t2i = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
prompt = "A robot pokemon, 4k photo"
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
image = pipe_t2i(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 13
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 1
* Image resolution: 768
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/yiyixu/text2image-fine-tune/runs/pxc1exfh).
|
luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple | luffycodes | 2023-09-06T20:24:13Z | 1,488 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"economics",
"chatgpt",
"vicuna",
"tutorbot",
"its",
"arxiv:2305.13272",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-21T19:10:41Z | ---
license: llama2
tags:
- economics
- chatgpt
- llama
- vicuna
- tutorbot
- its
---
# Nash Model Card
## Github details
Training of the Nash (Economics) model is based on the code used to train the equivalent Spock (Biology) model.
Please check out the repo: https://github.com/luffycodes/Tutorbot-Spock-Bio.
## Model details
**Model type:**
Nash is an open-source educational tutoring chatbot trained by fine-tuning the LLaMA-based Vicuna model on synthetic student-tutorbot conversations generated using a specialized prompt.
**Model date:**
Nash was trained between July 2023 and August 2023.
**Organizations developing the model:**
The Nash team with members from Rice University and OpenStax.
## Training dataset
700 conversations generated using a [specialized prompt](https://github.com/luffycodes/Tutorbot-Spock-Bio/blob/main/prompts/conversation_gen/v3.txt) from GPT-4 based on OpenStax Economics, Microeconomics, and Macroeconomics textbooks.
**Paper or resources for more information:**
https://arxiv.org/abs/2305.13272
**Code or resources for more information:**
Training of Nash is based on:
https://github.com/luffycodes/Tutorbot-Spock-Bio
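Since the checkpoint follows the standard Llama architecture, it can be loaded with the usual `transformers` APIs. A minimal sketch; the `USER:`/`ASSISTANT:` prompt format below follows the Vicuna v1.5 convention of the base model and is an assumption here:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "USER: Explain the law of demand in one paragraph. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```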
## Use Policy
Since the model is a derivative of the Llama model, please abide by the Llama use policy [here](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/USE_POLICY.md)
and [Llama-Responsible-Use-Guide](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/Responsible-Use-Guide.pdf).
**Ethical Considerations, License and Limitations:**
Similarly, since the model is a derivative of the Llama model, the same ethical considerations, license, and limitations as Llama apply.
**Out-of-scope Uses:**
Similarly, use in any manner that violates applicable laws or regulations (including trade compliance laws).
Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
"Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model."
## LLM Performance based on [huggingface LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|||Average|ARC|HellaSwag|MMLU|TruthfulQA|
|---|---|---|---|---|---|---|
|this model (fine-tuned on vicuna-13b-v1.5)|13B|61.8 |59.13 |80.64 |56.12 | 51.29 |
|lmsys/vicuna-13b-v1.5|13B|61.63 |57.08 |81.24 |56.67 |51.51 |
|meta-llama/Llama-2-13b-chat-hf|13B|59.93|59.04|81.94|54.64|44.12|
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ckandemir/cat | ckandemir | 2023-09-06T20:24:07Z | 18 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-08-21T19:09:34Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - ckandemir/cat
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on photo of a <new1> cat using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
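A minimal inference sketch for these weights. The weight file names below are the defaults written by the diffusers Custom Diffusion training script and are an assumption for this repository:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention weights and the <new1> token embedding
pipe.unet.load_attn_procs("ckandemir/cat", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("ckandemir/cat", weight_name="<new1>.bin")

image = pipe(
    "photo of a <new1> cat sitting in a garden",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("cat.png")
```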
|
abeiler/goatV9-chat-QLORA-Merged | abeiler | 2023-09-06T20:16:28Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2023-09-06T01:00:28Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: goatV9-chat-QLORA-Merged
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goatV9-chat-QLORA-Merged
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4856 | 0.16 | 200 | 0.4700 |
| 0.4337 | 0.31 | 400 | 0.4252 |
| 0.3998 | 0.47 | 600 | 0.4071 |
| 0.4126 | 0.63 | 800 | 0.3967 |
| 0.421 | 0.79 | 1000 | 0.3920 |
| 0.4018 | 0.94 | 1200 | 0.3902 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PvDeep/pixel_f2 | PvDeep | 2023-09-06T20:10:53Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:other",
"region:us"
]
| null | 2023-09-06T19:58:17Z | ---
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Terps/distilbert-base-uncased-finetuned-imdb | Terps | 2023-09-06T19:57:17Z | 125 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-06T19:53:18Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4119
## Model description
More information needed
## Intended uses & limitations
More information needed
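A quick way to try the model is the `fill-mask` pipeline (the example sentence is arbitrary):
```python
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="Terps/distilbert-base-uncased-finetuned-imdb")
for prediction in mask_filler("This movie was an absolute [MASK]."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```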
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7024 | 1.0 | 157 | 2.4968 |
| 2.5794 | 2.0 | 314 | 2.4281 |
| 2.5354 | 3.0 | 471 | 2.4509 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
google/deplot | google | 2023-09-06T19:53:17Z | 8,323 | 287 | transformers | [
"transformers",
"pytorch",
"safetensors",
"pix2struct",
"image-text-to-text",
"visual-question-answering",
"en",
"fr",
"ro",
"de",
"multilingual",
"arxiv:2212.10505",
"license:apache-2.0",
"region:us"
]
| visual-question-answering | 2023-04-03T11:05:38Z | ---
language:
- en
- fr
- ro
- de
- multilingual
inference: false
pipeline_tag: visual-question-answering
license: apache-2.0
---
# Model card for DePlot
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/deplot_architecture.png"
alt="drawing" width="600"/>
# Table of Contents
0. [TL;DR](#TL;DR)
1. [Using the model](#using-the-model)
2. [Contribution](#contribution)
3. [Citation](#citation)
# TL;DR
The abstract of the paper states that:
> Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA.
# Using the model
You can run a prediction by querying an input image together with a question as follows:
```python
from transformers import Pix2StructProcessor, Pix2StructForConditionalGeneration
import requests
from PIL import Image
processor = Pix2StructProcessor.from_pretrained('google/deplot')
model = Pix2StructForConditionalGeneration.from_pretrained('google/deplot')
url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt")
predictions = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(predictions[0], skip_special_tokens=True))
```
# Converting from T5x to huggingface
You can use the [`convert_pix2struct_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pix2struct/convert_pix2struct_original_pytorch_to_hf.py) script as follows:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --is_vqa
```
if you are converting a large model, run:
```bash
python convert_pix2struct_checkpoint_to_pytorch.py --t5x_checkpoint_path PATH_TO_T5X_CHECKPOINTS --pytorch_dump_path PATH_TO_SAVE --use-large --is_vqa
```
Once saved, you can push your converted model with the following snippet:
```python
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor
model = Pix2StructForConditionalGeneration.from_pretrained(PATH_TO_SAVE)
processor = Pix2StructProcessor.from_pretrained(PATH_TO_SAVE)
model.push_to_hub("USERNAME/MODEL_NAME")
processor.push_to_hub("USERNAME/MODEL_NAME")
```
# Contribution
This model was originally contributed by Fangyu Liu, Julian Martin Eisenschlos et al. and added to the Hugging Face ecosystem by [Younes Belkada](https://huggingface.co/ybelkada).
# Citation
If you want to cite this work, please consider citing the original paper:
```
@misc{liu2022deplot,
title={DePlot: One-shot visual language reasoning by plot-to-table translation},
author={Liu, Fangyu and Eisenschlos, Julian Martin and Piccinno, Francesco and Krichene, Syrine and Pang, Chenxi and Lee, Kenton and Joshi, Mandar and Chen, Wenhu and Collier, Nigel and Altun, Yasemin},
year={2022},
eprint={2212.10505},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
facebook/encodec_48khz | facebook | 2023-09-06T19:51:48Z | 16,693 | 29 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encodec",
"feature-extraction",
"arxiv:2210.13438",
"license:mit",
"region:us"
]
| feature-extraction | 2023-06-12T16:10:51Z | ---
inference: false
license: mit
---

# Model Card for EnCodec
This model card provides details and information about EnCodec, a state-of-the-art real-time audio codec developed by Meta AI.
## Model Details
### Model Description
EnCodec is a high-fidelity audio codec leveraging neural networks. It introduces a streaming encoder-decoder architecture with quantized latent space, trained in an end-to-end fashion.
The model simplifies and speeds up training using a single multiscale spectrogram adversary that efficiently reduces artifacts and produces high-quality samples.
It also includes a novel loss balancer mechanism that stabilizes training by decoupling the choice of hyperparameters from the typical scale of the loss.
Additionally, lightweight Transformer models are used to further compress the obtained representation while maintaining real-time performance.
- **Developed by:** Meta AI
- **Model type:** Audio Codec
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/facebookresearch/encodec)
- **Paper:** [EnCodec: End-to-End Neural Audio Codec](https://arxiv.org/abs/2210.13438)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
EnCodec can be used directly as an audio codec for real-time compression and decompression of audio signals.
It provides high-quality audio compression and efficient decoding. The model was trained on various bandwidths, which can be specified when encoding (compressing) and decoding (decompressing).
Two different setups exist for EnCodec:
- Non-streamable: the input audio is split into chunks of 1 second, with an overlap of 10 ms, which are then encoded.
- Streamable: weight normalization is used on the convolution layers, and the input is not split into chunks but rather padded on the left.
### Downstream Use
EnCodec can be fine-tuned for specific audio tasks or integrated into larger audio processing pipelines for applications such as speech generation,
music generation, or text to speech tasks.
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## How to Get Started with the Model
Use the following code to get started with the EnCodec model using a dummy example from the LibriSpeech dataset (~9MB). First, install the required Python packages:
```
pip install --upgrade pip
pip install --upgrade datasets[audio]
pip install git+https://github.com/huggingface/transformers.git@main
```
Then load an audio sample, and run a forward pass of the model:
```python
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
# load a demonstration dataset
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# load the model + processor (for pre-processing the audio)
model = EncodecModel.from_pretrained("facebook/encodec_48khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_48khz")
# cast the audio data to the correct sampling rate for the model
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[0]["audio"]["array"]
# pre-process the inputs
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
# explicitly encode then decode the audio inputs
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
```
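The target bandwidth (in kbps) can also be selected explicitly at encode time. A short sketch continuing from the variables above; 6.0 kbps is assumed to be one of the bandwidths this checkpoint was trained with (see `model.config.target_bandwidths` for the full list):
```python
# request a specific target bandwidth (kbps) instead of the default
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
```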
## Training Details
The model was trained for 300 epochs, with one epoch being 2,000 updates of the Adam optimizer, using a batch size of 64 one-second examples, a learning rate of 3 · 10⁻⁴, β1 = 0.5, and β2 = 0.9. All the models are trained using 8 A100 GPUs.
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- For speech:
- DNS Challenge 4
- [Common Voice](https://huggingface.co/datasets/common_voice)
- For general audio:
- [AudioSet](https://huggingface.co/datasets/Fhrozen/AudioSet2K22)
- [FSD50K](https://huggingface.co/datasets/Fhrozen/FSD50k)
- For music:
- [Jamendo dataset](https://huggingface.co/datasets/rkstgr/mtg-jamendo)
They used four different training strategies to sample from these datasets:
- (s1) sample a single source from Jamendo with probability 0.32;
- (s2) sample a single source from the other datasets with the same probability;
- (s3) mix two sources from all datasets with a probability of 0.24;
- (s4) mix three sources from all datasets except music with a probability of 0.12.
The audio is normalized by file and a random gain between -10 and 6 dB is applied.
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Subjective metric for restoration:
This model was evaluated using the MUSHRA protocol (Series, 2014), using both a hidden reference and a low anchor. Annotators were recruited using a
crowd-sourcing platform, in which they were asked to rate the perceptual quality of the provided samples in
a range between 1 and 100. They randomly select 50 samples of 5 seconds from each category of the test set
and force at least 10 annotations per sample. To filter noisy annotations and outliers, we remove annotators
who rate the reference recordings less than 90 in at least 20% of the cases, or rate the low-anchor recording
above 80 more than 50% of the time.
### Objective metric for restoration:
The ViSQOL metric was used together with the Scale-Invariant Signal-to-Noise Ratio (SI-SNR) (Luo & Mesgarani, 2019;
Nachmani et al., 2020; Chazan et al., 2021).
### Results
The results of the evaluation demonstrate the superiority of EnCodec compared to the baselines across different bandwidths (1.5, 3, 6, and 12 kbps).
When comparing EnCodec with the baselines at the same bandwidth, EnCodec consistently outperforms them in terms of MUSHRA score.
Notably, EnCodec achieves better performance, on average, at 3 kbps compared to Lyra-v2 at 6 kbps and Opus at 12 kbps.
Additionally, by incorporating the language model over the codes, it is possible to achieve a bandwidth reduction of approximately 25-40%.
For example, the bandwidth of the 3 kbps model can be reduced to 1.9 kbps.
#### Summary
EnCodec is a state-of-the-art real-time neural audio compression model that excels in producing high-fidelity audio samples at various sample rates and bandwidths.
The model's performance was evaluated across different settings, ranging from 24kHz monophonic at 1.5 kbps to 48kHz stereophonic, showcasing both subjective and
objective results. Notably, EnCodec incorporates a novel spectrogram-only adversarial loss, effectively reducing artifacts and enhancing sample quality.
Training stability and interpretability were further enhanced through the introduction of a gradient balancer for the loss weights.
Additionally, the study demonstrated that a compact Transformer model can be employed to achieve an additional bandwidth reduction of up to 40% without compromising
quality, particularly in applications where low latency is not critical (e.g., music streaming).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{défossez2022high,
title={High Fidelity Neural Audio Compression},
author={Alexandre Défossez and Jade Copet and Gabriel Synnaeve and Yossi Adi},
year={2022},
eprint={2210.13438},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
``` |
CyberHarem/focalors_genshin | CyberHarem | 2023-09-06T19:51:03Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/focalors_genshin",
"license:mit",
"region:us"
]
| text-to-image | 2023-08-12T03:24:56Z | ---
license: mit
datasets:
- CyberHarem/focalors_genshin
pipeline_tag: text-to-image
tags:
- art
---
# Lora of focalors_genshin
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 6720, you need to download `6720/focalors_genshin.pt` as the embedding and `6720/focalors_genshin.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 6720**, with the score of 0.993. The trigger words are:
1. `focalors_genshin`
2. `blue_eyes, hat, blue_hair, white_hair, hair_between_eyes, bangs, smile, bow, ahoge, multicolored_hair, long_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7200 | 0.988 | [Download](7200/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7200/previews/nude.png) | [<NSFW, click to see>](7200/previews/nude2.png) |  |  |
| **6720** | **0.993** | [**Download**](6720/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6720/previews/nude.png) | [<NSFW, click to see>](6720/previews/nude2.png) |  |  |
| 6240 | 0.991 | [Download](6240/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5760 | 0.987 | [Download](5760/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5280 | 0.982 | [Download](5280/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4800 | 0.988 | [Download](4800/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4800/previews/nude.png) | [<NSFW, click to see>](4800/previews/nude2.png) |  |  |
| 4320 | 0.987 | [Download](4320/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4320/previews/nude.png) | [<NSFW, click to see>](4320/previews/nude2.png) |  |  |
| 3840 | 0.989 | [Download](3840/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3360 | 0.991 | [Download](3360/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3360/previews/nude.png) | [<NSFW, click to see>](3360/previews/nude2.png) |  |  |
| 2880 | 0.990 | [Download](2880/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2880/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2880/previews/nude.png) | [<NSFW, click to see>](2880/previews/nude2.png) |  |  |
| 2400 | 0.971 | [Download](2400/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2400/previews/nude.png) | [<NSFW, click to see>](2400/previews/nude2.png) |  |  |
| 1920 | 0.955 | [Download](1920/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1440 | 0.988 | [Download](1440/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1440/previews/nude.png) | [<NSFW, click to see>](1440/previews/nude2.png) |  |  |
| 960 | 0.991 | [Download](960/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](960/previews/nude.png) | [<NSFW, click to see>](960/previews/nude2.png) |  |  |
| 480 | 0.992 | [Download](480/focalors_genshin.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](480/previews/nude.png) | [<NSFW, click to see>](480/previews/nude2.png) |  |  |
|
SaiedAlshahrani/bloom_3B_4bit_qlora_arc | SaiedAlshahrani | 2023-09-06T19:49:24Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-09-06T18:29:40Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_4bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_4bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Onutoa/2_5e-3_5_0.5 | Onutoa | 2023-09-06T19:30:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T15:52:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 2_5e-3_5_0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_5e-3_5_0.5
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0090
- Accuracy: 0.6991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.0566 | 1.0 | 590 | 1.9336 | 0.6208 |
| 1.8329 | 2.0 | 1180 | 1.8941 | 0.6226 |
| 1.8027 | 3.0 | 1770 | 1.6503 | 0.6043 |
| 1.7269 | 4.0 | 2360 | 1.7276 | 0.5180 |
| 1.7224 | 5.0 | 2950 | 1.7866 | 0.6223 |
| 1.6611 | 6.0 | 3540 | 1.6363 | 0.5988 |
| 1.6862 | 7.0 | 4130 | 1.7201 | 0.5593 |
| 1.5648 | 8.0 | 4720 | 1.7083 | 0.6339 |
| 1.5735 | 9.0 | 5310 | 1.5898 | 0.5991 |
| 1.5494 | 10.0 | 5900 | 1.6325 | 0.6385 |
| 1.5284 | 11.0 | 6490 | 1.6925 | 0.6303 |
| 1.478 | 12.0 | 7080 | 1.7338 | 0.5355 |
| 1.5236 | 13.0 | 7670 | 1.5156 | 0.6394 |
| 1.46 | 14.0 | 8260 | 1.8612 | 0.6321 |
| 1.4214 | 15.0 | 8850 | 1.4616 | 0.6471 |
| 1.4158 | 16.0 | 9440 | 1.5174 | 0.6089 |
| 1.3776 | 17.0 | 10030 | 1.4633 | 0.6278 |
| 1.344 | 18.0 | 10620 | 1.4902 | 0.6135 |
| 1.3644 | 19.0 | 11210 | 1.3897 | 0.6615 |
| 1.3559 | 20.0 | 11800 | 1.3980 | 0.6670 |
| 1.3053 | 21.0 | 12390 | 1.4601 | 0.6651 |
| 1.3035 | 22.0 | 12980 | 1.3306 | 0.6700 |
| 1.3067 | 23.0 | 13570 | 1.3644 | 0.6700 |
| 1.2856 | 24.0 | 14160 | 1.2897 | 0.6691 |
| 1.2743 | 25.0 | 14750 | 1.3909 | 0.6691 |
| 1.2704 | 26.0 | 15340 | 1.2935 | 0.6642 |
| 1.2606 | 27.0 | 15930 | 1.2985 | 0.6425 |
| 1.2164 | 28.0 | 16520 | 1.3179 | 0.6761 |
| 1.2137 | 29.0 | 17110 | 1.2708 | 0.6768 |
| 1.2185 | 30.0 | 17700 | 1.2182 | 0.6862 |
| 1.1769 | 31.0 | 18290 | 1.2422 | 0.6682 |
| 1.1815 | 32.0 | 18880 | 1.3006 | 0.6777 |
| 1.1648 | 33.0 | 19470 | 1.2125 | 0.6862 |
| 1.1368 | 34.0 | 20060 | 1.1602 | 0.6661 |
| 1.1736 | 35.0 | 20650 | 1.1483 | 0.6835 |
| 1.1383 | 36.0 | 21240 | 1.1702 | 0.6896 |
| 1.1406 | 37.0 | 21830 | 1.1127 | 0.6835 |
| 1.1461 | 38.0 | 22420 | 1.1293 | 0.6875 |
| 1.1199 | 39.0 | 23010 | 1.1855 | 0.6881 |
| 1.0878 | 40.0 | 23600 | 1.1871 | 0.6902 |
| 1.0852 | 41.0 | 24190 | 1.0959 | 0.6936 |
| 1.0873 | 42.0 | 24780 | 1.1361 | 0.6942 |
| 1.0633 | 43.0 | 25370 | 1.0750 | 0.6911 |
| 1.0758 | 44.0 | 25960 | 1.1282 | 0.6645 |
| 1.0446 | 45.0 | 26550 | 1.0763 | 0.6832 |
| 1.0373 | 46.0 | 27140 | 1.0759 | 0.6817 |
| 1.0318 | 47.0 | 27730 | 1.0454 | 0.6908 |
| 1.0354 | 48.0 | 28320 | 1.0636 | 0.7031 |
| 1.0276 | 49.0 | 28910 | 1.0394 | 0.6927 |
| 1.0211 | 50.0 | 29500 | 1.0369 | 0.7015 |
| 1.0021 | 51.0 | 30090 | 1.0366 | 0.6865 |
| 0.983 | 52.0 | 30680 | 1.0274 | 0.6960 |
| 1.0137 | 53.0 | 31270 | 1.0278 | 0.7028 |
| 0.9825 | 54.0 | 31860 | 1.0339 | 0.6899 |
| 0.9792 | 55.0 | 32450 | 1.0142 | 0.6969 |
| 0.9937 | 56.0 | 33040 | 1.0140 | 0.7024 |
| 0.9755 | 57.0 | 33630 | 1.0173 | 0.6972 |
| 0.9517 | 58.0 | 34220 | 1.0078 | 0.7 |
| 0.988 | 59.0 | 34810 | 1.0116 | 0.7018 |
| 0.9702 | 60.0 | 35400 | 1.0090 | 0.6991 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
pagebrain/epicphotogasm-v1 | pagebrain | 2023-09-06T19:26:48Z | 33 | 0 | diffusers | [
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-06T19:14:25Z | ---
license: creativeml-openrail-m
---
|
shubhamagarwal92/a2c-PandaReachDense-v2 | shubhamagarwal92 | 2023-09-06T19:25:56Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-07T04:49:50Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.36 +/- 0.22
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file listing):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed from the usual naming convention; adjust to match the files in this repo
checkpoint = load_from_hub(repo_id="shubhamagarwal92/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
ali26sami/finetuning-sentiment-model-3000-samples | ali26sami | 2023-09-06T19:23:39Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-06T19:17:04Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
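A quick way to try the model is the `text-classification` pipeline. The label names are not documented in this card, so the output may use the default `LABEL_0`/`LABEL_1` ids:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ali26sami/finetuning-sentiment-model-3000-samples")
print(classifier("I really enjoyed this film."))
```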
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
facebook/detr-resnet-101-dc5 | facebook | 2023-09-06T19:19:43Z | 163,218 | 17 | transformers | [
"transformers",
"pytorch",
"safetensors",
"detr",
"object-detection",
"dataset:coco",
"arxiv:2005.12872",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- object-detection
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
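To turn the raw outputs into thresholded detections with class names, a post-processing step along the following lines can be used. This is a sketch assuming a recent `transformers` version in which the feature extractor exposes `post_process_object_detection`:
```python
import torch

target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
results = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```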
## Training data
The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves an AP (average precision) of **44.9** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
kimnguyenwork/rl_course_vizdoom_health_gathering_supreme | kimnguyenwork | 2023-09-06T19:16:58Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T19:16:50Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.45 +/- 5.01
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kimnguyenwork/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment (for VizDoom, the stock `sf_examples.vizdoom.enjoy_vizdoom` entry point that ships with Sample-Factory):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment (`sf_examples.vizdoom.train_vizdoom`):
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
nanom/bert_adaptation_peppa_pig | nanom | 2023-09-06T19:09:31Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-06T18:58:19Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
widget:
- text: ¡Hola, soy Peppa [MASK]!.
example_title: Example 1
- text: "[MASK], puedes decir dinosarurio?."
example_title: Example 2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4747
## Model description
More information needed
## Intended uses & limitations
More information needed
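That said, the model can be tried out directly with the fill-mask pipeline; a minimal sketch using one of the widget prompts above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nanom/bert_adaptation_peppa_pig")
print(fill_mask("¡Hola, soy Peppa [MASK]!"))
```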
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0118 | 1.0 | 35 | 3.3649 |
| 2.8508 | 2.0 | 70 | 2.6014 |
| 2.537 | 3.0 | 105 | 2.3486 |
| 2.3814 | 4.0 | 140 | 2.3938 |
| 2.2644 | 5.0 | 175 | 2.2629 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
prajwalJumde/test | prajwalJumde | 2023-09-06T19:01:38Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-06T19:01:16Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: funsd-layoutlmv3
type: funsd-layoutlmv3
config: funsd
split: test
args: funsd
metrics:
- name: Precision
type: precision
value: 0.8925979680696662
- name: Recall
type: recall
value: 0.9165424739195231
- name: F1
type: f1
value: 0.9044117647058824
- name: Accuracy
type: accuracy
value: 0.86009746820397
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6509
- Precision: 0.8926
- Recall: 0.9165
- F1: 0.9044
- Accuracy: 0.8601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.33 | 100 | 0.7445 | 0.7475 | 0.7869 | 0.7667 | 0.7630 |
| No log | 2.67 | 200 | 0.5447 | 0.8075 | 0.8793 | 0.8419 | 0.8194 |
| No log | 4.0 | 300 | 0.5183 | 0.8425 | 0.8957 | 0.8683 | 0.8418 |
| No log | 5.33 | 400 | 0.5603 | 0.8281 | 0.8952 | 0.8603 | 0.8307 |
| 0.5735 | 6.67 | 500 | 0.5571 | 0.8535 | 0.9001 | 0.8762 | 0.8376 |
| 0.5735 | 8.0 | 600 | 0.5647 | 0.8824 | 0.9096 | 0.8958 | 0.8536 |
| 0.5735 | 9.33 | 700 | 0.5896 | 0.8802 | 0.9121 | 0.8958 | 0.8547 |
| 0.5735 | 10.67 | 800 | 0.6298 | 0.8935 | 0.9165 | 0.9049 | 0.8587 |
| 0.5735 | 12.0 | 900 | 0.6280 | 0.8965 | 0.9210 | 0.9086 | 0.8615 |
| 0.1395 | 13.33 | 1000 | 0.6509 | 0.8926 | 0.9165 | 0.9044 | 0.8601 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
SaleemUllah/distilbert-base-uncased-finetuned-squad | SaleemUllah | 2023-09-06T18:59:05Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-06T18:41:06Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 7 | 4.5971 |
| No log | 2.0 | 14 | 3.4069 |
| No log | 3.0 | 21 | 2.8634 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0+cpu
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nanom/bert_adaptation_vizwiz | nanom | 2023-09-06T18:54:36Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-06T18:51:12Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_vizwiz
results: []
widget:
- text: please [MASK] this shirt.
example_title: Example 1
- text: can you tell me the title of the book? [MASK].
example_title: Example 2
- text: what [MASK] is this?
example_title: Example 3
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_vizwiz
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4046 | 1.0 | 375 | 1.1987 |
| 1.1951 | 2.0 | 750 | 1.0976 |
| 1.0913 | 3.0 | 1125 | 1.1045 |
| 1.0711 | 4.0 | 1500 | 1.0678 |
| 1.0434 | 5.0 | 1875 | 1.0652 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nanom/bert_adaptation_martin_fierro | nanom | 2023-09-06T18:40:43Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-06T18:39:34Z | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_martin_fierro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_martin_fierro
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4082
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7508 | 1.0 | 29 | 5.2082 |
| 4.7335 | 2.0 | 58 | 4.4594 |
| 4.1562 | 3.0 | 87 | 4.2792 |
| 3.9629 | 4.0 | 116 | 3.9394 |
| 4.2598 | 5.0 | 145 | 4.3763 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
kimnguyenwork/cartpole-v1 | kimnguyenwork | 2023-09-06T18:34:24Z | 0 | 0 | null | [
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T18:16:21Z | ---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 203.30 +/- 123.31
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'kimnguyenwork/cartpole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
asas-ai/bloom_3B_8bit_qlora_arc | asas-ai | 2023-09-06T18:26:50Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:asas-ai/bloom_3B_8bit",
"base_model:finetune:asas-ai/bloom_3B_8bit",
"region:us"
]
| null | 2023-09-06T18:26:14Z | ---
base_model: asas-ai/bloom_3B_8bit
tags:
- generated_from_trainer
model-index:
- name: bloom_3B_8bit_qlora_arc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_3B_8bit_qlora_arc
This model is a fine-tuned version of [asas-ai/bloom_3B_8bit](https://huggingface.co/asas-ai/bloom_3B_8bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
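The repository name and training setup suggest a QLoRA adapter rather than merged weights; if so, it would typically be loaded on top of the base model with PEFT. A rough sketch (the adapter layout is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model (8-bit) plus the fine-tuned adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("asas-ai/bloom_3B_8bit", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("asas-ai/bloom_3B_8bit")
model = PeftModel.from_pretrained(base, "asas-ai/bloom_3B_8bit_qlora_arc")
```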
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 2200
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mc1017/q-FrozenLake-v1-4x4-Slippery | mc1017 | 2023-09-06T18:19:09Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T18:19:06Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.75 +/- 0.43
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # environment API used by the Deep RL Course

model = load_from_hub(repo_id="mc1017/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# `load_from_hub` is the download/unpickling helper defined in the Deep RL Course notebook.
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
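A short greedy rollout with the downloaded Q-table could then look like this (a sketch assuming the pickled dict stores the table under a `"qtable"` key, as in the course notebook):
```python
import numpy as np

qtable = model["qtable"]  # assumed key, as in the course notebook
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```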
|
SourYuzu/WiiLora_V2 | SourYuzu | 2023-09-06T18:09:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-06T17:59:37Z | ---
license: creativeml-openrail-m
---
|
shahkeyush2002/Textile-Defect-Detection | shahkeyush2002 | 2023-09-06T18:06:40Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-06T17:01:23Z | ---
title: Textile Defect
emoji: ⚡
colorFrom: green
colorTo: blue
sdk: streamlit
sdk_version: 1.21.0
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
akanametov/a2c-PandaReachDense-v2 | akanametov | 2023-09-06T17:48:22Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-01-18T13:18:04Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.77 +/- 1.51
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
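In the meantime, a minimal loading sketch; the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming convention used with huggingface_sb3:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("akanametov/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```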
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
Suchinthana/MT-5-Sinhala-Wikigen | Suchinthana | 2023-09-06T17:18:56Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"si",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-08-13T09:08:38Z | ---
license: apache-2.0
language:
- si
widget:
- text: 'writeWiki: මානව ආහාර'
- text: 'writeWiki: ගෝලීයකරණය'
- text: 'writeWiki: ජංගම දුරකථනය'
- text: 'writeWiki: ඇස්කිමෝවරු'
- text: 'writeWiki: අනුරාධපුරය'
datasets:
- wikipedia
---
### Fine tuned MT5 base model with Sinhala Wikipedia Dataset
This model is fine-tuned on articles from Sinhala Wikipedia for article generation. We used around 10,000 articles for training and fine-tuned the model more than 100 times.
### How to use
We have to prepend **"writeWiki: "** to the beginning of each prompt.
You can use this model with a pipeline for text generation.
First you might need to install required libraries and import them.
```py
!pip uninstall transformers -y
!pip install transformers
!pip install tokenizers sentencepiece
```
Then we might need to restart the runtime, either manually or by running the code below to end it.
```py
import os
os.kill(os.getpid(), 9)
```
Then we just have to import the tokenizer and run the pipeline:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('google/mt5-base')
from transformers import pipeline
generator = pipeline(model='Suchinthana/MT5-Sinhala-Wikigen-Experimental', tokenizer=tokenizer)
generator("writeWiki: මානව ආහාර", do_sample=True, max_length=180)
``` |
LogitsAI/Llama-2-7b-chat-hf | LogitsAI | 2023-09-06T17:15:16Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2023-09-06T16:21:33Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
arxiv: 2307.09288
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
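As a rough illustration of that layout (not Meta's reference implementation, which lives in the linked `chat_completion` code), a hand-assembled single-turn prompt could look like this:
```python
# Hand-assembled single-turn Llama-2-Chat prompt (illustrative only).
# The BOS/EOS tokens are added by the tokenizer, so they are not written out here.
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

prompt = (
    "[INST] <<SYS>>\n"
    f"{system_prompt}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)
print(prompt)
```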
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
|
holtbui/bert-finetuned-ner | holtbui | 2023-09-06T17:15:08Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-06T17:03:06Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0594
## Model description
More information needed
## Intended uses & limitations
More information needed
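Even so, the model can be exercised with the token-classification pipeline; a minimal sketch:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="holtbui/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```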
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0773 | 1.0 | 1756 | 0.0817 |
| 0.0387 | 2.0 | 3512 | 0.0599 |
| 0.0244 | 3.0 | 5268 | 0.0594 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Thethela/finetune-2-open_llama_3b_v2 | Thethela | 2023-09-06T17:12:23Z | 7 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-06T17:12:21Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
FelipeCasali-USP/hf_hub_example-946006e0-d807-4c7a-8a1f-0659690a005c | FelipeCasali-USP | 2023-09-06T17:07:23Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"region:us"
]
| tabular-classification | 2023-09-06T01:33:25Z | ---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: skops-kwpjljh4.pkl
widget:
structuredData:
Classifier:
- cnpj
- cnpj
- cnpj
Data:
- '89094553000180'
- '56321179000190'
- '71685006000196'
Tag:
- 0
- 0
- 0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
[More Information Needed]
### Hyperparameters
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-------------------|---------|
| C | 1.0 |
| class_weight | |
| dual | False |
| fit_intercept | True |
| intercept_scaling | 1 |
| l1_ratio | |
| max_iter | 100 |
| multi_class | auto |
| n_jobs | |
| penalty | l2 |
| random_state | |
| solver | lbfgs |
| tol | 0.0001 |
| verbose | 0 |
| warm_start | False |
</details>
### Model Plot
<style>#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac {color: black;background-color: white;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac pre{padding: 0;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-toggleable {background-color: white;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-estimator:hover {background-color: #d4ebff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-item {z-index: 1;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
50%;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-parallel-item:only-child::after {width: 0;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac div.sk-text-repr-fallback {display: none;}</style><div id="sk-b9e1483c-5ec8-4c61-8c6e-d3973f1c75ac" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>LogisticRegression()</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="f8e07c50-7b86-4299-9336-68f0fb7d60ca" type="checkbox" checked><label for="f8e07c50-7b86-4299-9336-68f0fb7d60ca" class="sk-toggleable__label sk-toggleable__label-arrow">LogisticRegression</label><div class="sk-toggleable__content"><pre>LogisticRegression()</pre></div></div></div></div></div>
## Evaluation Results
[More Information Needed]
# How to Get Started with the Model
[More Information Needed]
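A rough sketch of loading the pickled estimator listed in the metadata above (`skops-kwpjljh4.pkl`); the file format and contents are assumptions based on the card metadata:
```python
import pickle
from huggingface_hub import hf_hub_download

# Only unpickle files from sources you trust.
path = hf_hub_download(
    repo_id="FelipeCasali-USP/hf_hub_example-946006e0-d807-4c7a-8a1f-0659690a005c",
    filename="skops-kwpjljh4.pkl",
)
with open(path, "rb") as f:
    model = pickle.load(f)
print(model)  # LogisticRegression()
```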
# Model Card Authors
This model card is written by the following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
jejel/dreambooth_mrabdel_sdxl | jejel | 2023-09-06T16:50:52Z | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-06T15:24:21Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of mrabdel person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
SourYuzu/WiiLora | SourYuzu | 2023-09-06T16:34:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-06T16:31:50Z | ---
license: creativeml-openrail-m
---
|
Giorgib/bert-finetuned-on-squad | Giorgib | 2023-09-06T16:25:36Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-06T15:36:41Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-on-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-on-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bugsys0302/Unorganized-LoRA | Bugsys0302 | 2023-09-06T16:19:03Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-07-17T17:34:43Z | ---
license: creativeml-openrail-m
---
|
Benjaminabruzzo/ppo-LunarLander-v2 | Benjaminabruzzo | 2023-09-06T16:16:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-06T16:15:55Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.79 +/- 53.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
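In the meantime, here is a minimal loading and rollout sketch; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` convention.
checkpoint = load_from_hub("Benjaminabruzzo/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```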
|
Ori/lama-2-13b-peft-strategyqa-no-retrieval | Ori | 2023-09-06T15:59:13Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
]
| null | 2023-09-06T15:54:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|