modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ElinLiu/distilbert-base-uncased | ElinLiu | 2023-09-15T14:00:26Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-15T13:49:34Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.9331143796224208
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model was trained from scratch on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1845
- Accuracy: 0.933
- F1: 0.9331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0738 | 1.0 | 250 | 0.1987 | 0.934 | 0.9345 |
| 0.0681 | 2.0 | 500 | 0.1845 | 0.933 | 0.9331 |
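As a sanity check, the step counts in the table follow from the dataset and batch size, and the linear scheduler halves the learning rate by mid-training. A minimal sketch, assuming the emotion dataset's 16,000-example train split and zero warmup steps (neither is stated in the card):

```python
# Reproduce the step counts and the linear LR schedule implied by the
# hyperparameters above. The 16,000-example train split and zero warmup
# are assumptions, not stated in this card.
train_examples = 16_000
batch_size = 64
num_epochs = 2
learning_rate = 2e-5

steps_per_epoch = train_examples // batch_size   # 250, matching the table
total_steps = steps_per_epoch * num_epochs       # 500

def linear_lr(step: int) -> float:
    """Linearly decay from the initial LR to 0 over total_steps."""
    return learning_rate * max(0.0, 1 - step / total_steps)

print(steps_per_epoch, total_steps, linear_lr(250))
```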
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AmelieSchreiber/esm2_t12_35M_lora_binding_sites_v2_cp2 | AmelieSchreiber | 2023-09-15T13:51:51Z | 9 | 0 | peft | [
"peft",
"token-classification",
"en",
"dataset:AmelieSchreiber/binding_sites_random_split_by_family_550K",
"license:mit",
"region:us"
]
| token-classification | 2023-09-14T02:08:26Z | ---
library_name: peft
license: mit
language:
- en
datasets:
- AmelieSchreiber/binding_sites_random_split_by_family_550K
metrics:
- accuracy
- f1
- precision
- recall
- roc_auc
- matthews_correlation
pipeline_tag: token-classification
---
# ESM-2 for Binding Site Prediction
**This model is overfit.**
## Training procedure
```
Epoch: 2
Training Loss: 0.051500
Validation Loss: 0.417758
Accuracy: 0.951942
Precision: 0.422039
Recall: 0.747092
F1 Score: 0.539379
AUC: 0.853526
MCC: 0.539618
```
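As a quick consistency check on the numbers above, the reported F1 score is (up to rounding) the harmonic mean of the reported precision and recall:

```python
# Verify that the reported F1 is consistent with the reported precision
# and recall: F1 = 2PR / (P + R).
precision = 0.422039
recall = 0.747092

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # close to the reported F1 Score of 0.539379
```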
### Framework versions
- PEFT 0.5.0 |
LarryAIDraw/clorinde0ax03 | LarryAIDraw | 2023-09-15T13:45:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T13:34:22Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/143742/clorindegenshin-impact |
LarryAIDraw/lifiru | LarryAIDraw | 2023-09-15T13:42:58Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T13:32:24Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/143029/lifiru-mystic-wiz-black-cat-oror |
bongo2112/sdxl-db-moodewji-v3 | bongo2112 | 2023-09-15T13:42:39Z | 1 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-14T11:37:32Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of moodewjitz man
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
LarryAIDraw/MahoukaMayumi-v1-06 | LarryAIDraw | 2023-09-15T13:41:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T13:34:51Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/145499/mahouka-koukou-no-rettousei-mayumi-saegusa-3-outfits |
LarryAIDraw/izayoi_sakuya_touhou | LarryAIDraw | 2023-09-15T13:40:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T13:37:54Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/131022/izayoisakuya-touhou |
lexaizero/sloazqakfqy | lexaizero | 2023-09-15T13:32:01Z | 0 | 0 | null | [
"license:cc-by-nc-sa-3.0",
"region:us"
]
| null | 2023-09-15T07:44:27Z | ---
license: cc-by-nc-sa-3.0
---
|
fnlp/SpeechGPT-7B-ma | fnlp | 2023-09-15T13:29:28Z | 47 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2305.11000",
"arxiv:2308.16692",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-14T13:43:03Z | # SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
<a href='https://0nutation.github.io/SpeechGPT.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2305.11000'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> [](https://huggingface.co/datasets/fnlp/SpeechInstruct)
<p align="center">
<img src="Pictures/logo.png" width="20%"> <br>
</p>
## Introduction
SpeechGPT is a large language model with **intrinsic cross-modal conversational abilities**, capable of perceiving and generating multi-modal content following human instructions. With discrete speech representations, we first construct **SpeechInstruct**, a large-scale cross-modal speech instruction dataset. We then employ a three-stage training strategy that includes **modality-adaptation pre-training**, **cross-modal instruction fine-tuning**, and **chain-of-modality instruction fine-tuning**. The experimental results demonstrate that SpeechGPT has an impressive capacity to follow multi-modal human instructions and highlight the potential of handling multiple modalities with one model. <br>
SpeechGPT demos are shown in our [project page](https://0nutation.github.io/SpeechGPT.github.io/). As shown in the demos, SpeechGPT has strong cross-modal instruction-following ability and spoken dialogue ability. SpeechGPT can be **a talking encyclopedia, your personal assistant, your chat partner, a poet, a psychologist and your educational assistant**...
<br>
<br>
<p align="center">
<img src="Pictures/speechgpt-intro.png" width="95%"> <br>
SpeechGPT’s capabilities to tackle multiple cross-modal tasks
</p>
<br>
<br>
<p align="center">
<img src="Pictures/SpeechGPT-main.png" width="95%"> <br>
Left: SpeechInstruct construction process. Right: SpeechGPT model structure
</p>
## Release
- **[2023/9/15]** We released the SpeechGPT code and checkpoints and the SpeechInstruct dataset.
- **[2023/9/1]** We proposed **SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models** and released its code and checkpoints. Check out the [paper](https://arxiv.org/abs/2308.16692), [demo](https://0nutation.github.io/SpeechTokenizer.github.io/) and [github](https://github.com/ZhangXInFD/SpeechTokenizer).
- **[2023/5/18]** We released **SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities**, the first multi-modal LLM capable of perceiving and generating multi-modal content following multi-modal human instructions. Check out the [paper](https://arxiv.org/abs/2305.11000) and [demo](https://0nutation.github.io/SpeechGPT.github.io/).
## Table of Contents
- [Open-source list](#open-source-list)
- [Talk with SpeechGPT](#talk-with-speechgpt)
- [Train SpeechGPT](#train-speechgpt)
- [Finetune SpeechGPT](#finetune-speechgpt)
## Open-source list
### Models
- [**SpeechGPT-7B-ma**](https://huggingface.co/fnlp/SpeechGPT-7B-ma): The model obtained after the first-stage modality-adaptation pre-training, which was initialized with LLaMA-7B and further pre-trained on LibriLight speech units.
- [**SpeechGPT-7B-cm**](https://huggingface.co/fnlp/SpeechGPT-7B-cm): The model obtained after the second-stage cross-modal instruction finetuning, which was initialized with SpeechGPT-7B-ma and further finetuned on SpeechInstruct Cross-Modal Instruction set. This is a powerful foundational model that aligns speech and text.
- [**SpeechGPT-7B-com**](https://huggingface.co/fnlp/SpeechGPT-7B-com): The model obtained after the third-stage chain-of-modality instruction finetuning, which was initialized with SpeechGPT-7B-cm and further lora-finetuned on SpeechInstruct Chain-of-Modality Instruction set. This is an adapter-model of SpeechGPT-7B-cm for spoken dialogue.
### Datasets
- [**SpeechInstruct-cross-modal**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl): The cross-modal instruction set, about 9 million unit-text data pairs tokenized by mHuBERT from large-scale English ASR datasets.
- [**SpeechInstruct-chain-of-modality**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl): The chain-of-thought style instructions for four input-output formats, namely Speech Instruction-Speech Response, Speech Instruction-Text Response, Text Instruction-Speech Response, and Text Instruction-Text Response.
SpeechInstruct-cross-modal data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: Try to speak out this sentence, please. This is input: The alchemist rode in front, with the falcon on his shoulder.<eoh> [SpeechGPT]: <sosp><661><588><604><157><596><499><596><106><596><189><63><189><665><991><162><202><393><946><327><905><907><597><660><351><557><794><788><59><754><12><977><877><333><873><835><67><940><118><686><613><169><72><644><553><535><935><101><741><384><173><894><787><380><787><196><555><721><944><250><56><812><222><915><143><390><479><330><435><647><246><650><816><325><506><686><208><613><417><755><193><411><452><111><735><6><735><63><665><644><991><535><271><333><196><918><29><202><393><946><734><390><479><330><776><167><761><907><597><660><351><557><794><75><788><15><366><896><627><168><654><659><177><183><609><710><187><493><361><470><821><59><56><198><912><742><840><431><531><76><668><576><803><791><380><660><325><801><549><366><377><164><309><584><605><193><71><39><eosp><eoa> "
},
]
```
SpeechInstruct-chain-of-modality data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: <sosp><661><987><511><732><951><997><111><982><189><63><665><991><535><101><741><173><945><944><503><641><124><565><734><870><290><978><833><238><761><907><430><901><185><403><557><244><583><788><663><969><896><627><143><515><663><969><660><691><251><412><260><41><740><677><253><380><382><268><506><876><417><755><16><819><80><651><80><651><80><987><588><eosp><eoh>. [SpeechGPT]: What is a bad term for poop?; [ta] A bad term for poop is excrement. It is usually used as a polite way to refer to fecal waste.; [ua] <sosp><497><63><264><644><710><823><565><577><154><331><384><173><945><29><244><326><583><728><576><663><969><896><627><143><38><515><663><24><382><251><676><412><260><41><740><677><253><382><268><876><233><878><609><389><771><865><641><124><878><609><423><384><879><487><219><522><589><337><126><119><663><748><12><671><877><377><385><902><819><619><842><419><997><829><111><666><42><277><63><665><644><389><771><685><437><641><124><258><436><139><340><11><59><518><56><948><86><258><436><139><340><347><376><940><118><944><878><173><641><124><362><734><179><961><931><878><609><423><384><879><219><522><866><337><243><935><101><741><822><89><194><630><86><555><105><79><868><220><156><824><998><870><390><422><330><776><663><969><523><105><79><799><220><357><390><479><422><330><776><485><165><86><501><119><716><205><521><787><935><101><741><89><194><664><835><67><940><118><613><417><755><902><415><772><497><eosp><eoa>."
},
]
```
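For downstream processing, a record like the one above can be split at the `[SpeechGPT]:`, `[ta]` (text answer), and `[ua]` (unit answer) markers. The helper below is an illustrative sketch based on the example record, not part of the SpeechGPT codebase:

```python
import re

# Split a chain-of-modality "plain_text" field into transcript, text
# answer, and discrete speech units. Illustrative sketch only; the
# marker layout follows the example record above.
def parse_com_response(plain_text: str) -> dict:
    human, _, model = plain_text.partition("[SpeechGPT]:")
    transcript, _, rest = model.partition("; [ta]")
    text_answer, _, unit_answer = rest.partition("; [ua]")
    # Discrete speech units look like <123>; collect them as integers.
    units = [int(u) for u in re.findall(r"<(\d+)>", unit_answer)]
    return {
        "transcript": transcript.strip(),
        "text_answer": text_answer.strip(),
        "units": units,
    }

sample = ("[Human]: <sosp><661><987><eosp><eoh>. [SpeechGPT]: What is X?; "
          "[ta] X is Y.; [ua] <sosp><497><63><eosp><eoa>.")
parsed = parse_com_response(sample)
print(parsed["text_answer"], parsed["units"])
```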
## Talk with SpeechGPT
**Due to limited training data and resources, the performance of the open-source SpeechGPT is currently not optimal. Problems such as task recognition errors and inaccuracies in speech recognition may occur. As this project is primarily an exploration in research, we have not increased the amount of pretraining and sft data or training steps to enhance performance. Our hope is that SpeechGPT can serve as a foundational model to encourage research and exploration in the field of speech language models.**
### Installation
```bash
git clone https://github.com/0nutation/SpeechGPT
cd SpeechGPT
conda create --name SpeechGPT python=3.8
conda activate SpeechGPT
pip install -r requirements.txt
```
### Download
To talk with SpeechGPT, you should download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) and [SpeechGPT-7B-com](https://huggingface.co/fnlp/SpeechGPT-7B-com) locally.
You should download the mHuBERT model to ```utils/speech2unit/```. Please see [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md) for details.
```bash
s2u_dir="utils/speech2unit"
cd ${s2u_dir}
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3.pt
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3_L11_km1000.bin
```
You should download the unit-vocoder to ```utils/vocoder/```. Please see [vocoder](https://github.com/0nutation/SpeechGPT/utils/vocoder/README_DATA.md) for details.
```bash
vocoder_dir="utils/vocoder/"
cd ${vocoder_dir}
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/config.json -O config.json
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000 -O vocoder.pt
```
### CLI Inference
```bash
python3 speechgpt/src/infer/cli_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output"
```
**Notes**
For speech input, you can provide the path to the audio file. For ASR or TTS tasks, you must prefix the speech or text with ```this is input: ```, otherwise, it may be recognized incorrectly.
The speech response will be saved to a ```.wav``` file, and detailed responses will be saved in a JSON file. The paths to these files will be indicated in the response.
Here are some examples of talking with SpeechGPT:
**Textual dialogue example**
```
Please talk with SpeechGPT:
Who is Lebron James?
Response:
Lebron James is an American professional basketball player for the Los Angeles Lakers of the National Basketball Association (NBA). He is considered one of the greatest basketball players of all time and is known for his athleticism, scoring ability, and leadership skills. He is a four-time NBA MVP, a 14-time NBA All-Star, a 13-time All-NBA selection, and a two-time Olympic gold medalist.
Response json is saved in output/responses.json
```
**Spoken dialogue example**
```
Please talk with SpeechGPT:
prompts/0.wav
Transcript: What are the main causes of climate change?
Text response: The main causes of climate change are human activities such as burning fossil fuels, deforestation, and agricultural practices. These activities release greenhouse gases, like carbon dioxide and Methane, into the atmosphere which trap heat and cause the Earth's temperature to rise.
Speech response is saved in output/wav/answer_0.wav
Response json is saved in output/responses.json
```
**ASR example**
```
Please talk with SpeechGPT:
Recognize this speech, this is input: prompts/1.wav
Response:
today is a sunny day.
Response json is saved in output/responses.json
```
**TTS example**
```
Please talk with SpeechGPT:
Read this sentence aloud, this is input: Today is a sunny day.
Response:
<sosp> <661> <987> <520> <982> <681> <982> <681> <982> <681> <982> <681> <982> <189> <63> <662> <79> <868> <220> <196> <166> <549> <822> <89> <194> <633> <14> <855> <183> <609> <389> <771> <865> <641> <124> <362> <734> <742> <98> <519> <26> <204> <280> <668> <167> <104> <650> <179> <961> <428> <950> <82> <165> <196> <166> <549> <822> <89> <194> <458> <726> <603> <819> <651> <133> <651> <133> <186> <133> <186> <133> <186> <511> <186> <511> <eosp>
Speech response is saved in output/wav/answer_1.wav
Response json is saved in output/responses.json
```
### Gradio Web UI
```bash
python3 speechgpt/src/infer/web_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output/"
```
## Train SpeechGPT
### Stage1: Modality-adaptation Pre-training
First, utilize mHuBERT for discretizing the LibriLight dataset to obtain discrete unit sequences for stage1 training. You can refer to the data processing methods in [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md).
Second, divide the discrete units into a training set and a development set, and save them in the following format in the files ```data/stage1/train.txt``` and ```data/stage1/dev.txt```:
```
<sosp><189><247><922><991><821><258><485><974><284><466><969><523><196><202><881><331><822><853><432><32><742><98><519><26><204><280><576><384><879><901><555><944><366><641><124><362><734><156><824><462><761><907><430><81><597><716><205><521><470><821><677><355><483><641><124><243><290><978><82><620><915><470><821><576><384><466><398><212><455><931><579><969><778><45><914><445><469><576><803><6><803><791><377><506><835><67><940><613><417><755><237><224><452><121><736><eosp>
<sosp><300><189><63><6><665><991><881><331><6><384><879><945><29><244><583><874><655><837><81><627><545><124><337><850><412><213><260><41><740><797><211><488><961><428><6><196><555><944><873><32><683><700><955><812><328><915><166><250><56><903><86><233><479><330><776><167><104><764><259><921><366><663><432><431><531><976><314><822><89><664><377><611><479><417><eosp>
<sosp><189><735><991><39><565><734><32><742><98><519><26><204><280><668><576><803><791><660><555><233><787><101><741><466><969><219><107><459><491><556><384><733><219><501><445><137><910><523><793><50><981><230><534><321><948><86><116><281><62><462><104><70><918><743><15><212><455><143><836><173><944><958><390><422><66><776><258><436><139><663><432><742><98><519><589><243><126><260><41><444><6><655><764><969><219><727><85><297><700><362><493><6><493><361><393><946><6><470><821><246><655><837><81><969><916><584><819><544><452><158><452><736><eosp>
```
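Each line is a single utterance framed by `<sosp>`/`<eosp>`. A minimal sketch for reading such a line back into integer unit IDs (illustrative only, not part of the training scripts):

```python
import re

# Turn one line of data/stage1/train.txt into a list of integer unit
# IDs, checking the <sosp>...<eosp> framing shown above.
def parse_unit_line(line: str) -> list[int]:
    line = line.strip()
    assert line.startswith("<sosp>") and line.endswith("<eosp>")
    return [int(u) for u in re.findall(r"<(\d+)>", line)]

units = parse_unit_line("<sosp><189><247><922><eosp>")
print(units)  # [189, 247, 922]
```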
Third, you should download LLaMA-7B (HuggingFace) to ```llama/hf/7B```.
Now you can start stage1 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/ma_pretrain.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 2: Cross-modal Instruction Finetuning
You should download [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl) to ```data/stage2/```.
If you want to skip stage1 training, you can download ```SpeechGPT-7B-ma``` to ```output/stage1/```.
Now you can start stage2 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/cm_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 3: Chain-of-modality Instruction Finetuning
You should download [SpeechInstruct Chain-of-modality Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl) to ```data/stage3/```.
If you want to skip stage1 and stage2, you can download ```SpeechGPT-7B-cm``` to ```output/stage2/```.
Now you can start stage3 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/com_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
## Finetune SpeechGPT
```SpeechGPT-7B-cm``` is a foundational model with strong alignment between speech and text. We encourage fine-tuning SpeechGPT based on this model.
Step 1: Prepare your data following the format of the [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl).
Step 2: Download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) locally.
Step 3: Modify the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/cm_sft.sh``` script to yours and then run it. For LoRA fine-tuning, update the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/com_sft.sh``` script and run it.
## Acknowledgements
- [MOSS](https://github.com/OpenLMLab/MOSS): We use moss-sft-002-data.
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca): The codebase we built upon.
## Citation
If you find SpeechGPT useful for your research and applications, please cite it using this BibTeX:
```
@misc{zhang2023speechgpt,
title={SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities},
author={Dong Zhang and Shimin Li and Xin Zhang and Jun Zhan and Pengyu Wang and Yaqian Zhou and Xipeng Qiu},
year={2023},
eprint={2305.11000},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
CyberHarem/takagaki_kaede_idolmastercinderellagirls | CyberHarem | 2023-09-15T13:17:44Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/takagaki_kaede_idolmastercinderellagirls",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T13:01:59Z | ---
license: mit
datasets:
- CyberHarem/takagaki_kaede_idolmastercinderellagirls
pipeline_tag: text-to-image
tags:
- art
---
# Lora of takagaki_kaede_idolmastercinderellagirls
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them together. The pt file is loaded as an embedding, while the safetensors file is loaded as LoRA weights.
For example, if you want to use the model from step 3500, you need to download `3500/takagaki_kaede_idolmastercinderellagirls.pt` as the embedding and `3500/takagaki_kaede_idolmastercinderellagirls.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
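The per-step pairing can be expressed as a small helper (an illustrative sketch; the directory layout follows the description above):

```python
# Given a training step, return the two artifacts that must be used
# together: the embedding (.pt) and the LoRA weights (.safetensors).
# The <step>/<name>.<ext> layout follows the repository description.
NAME = "takagaki_kaede_idolmastercinderellagirls"

def artifacts_for_step(step: int) -> tuple[str, str]:
    embedding = f"{step}/{NAME}.pt"        # load as a textual-inversion embedding
    lora = f"{step}/{NAME}.safetensors"    # load as LoRA weights
    return embedding, lora

emb, lora = artifacts_for_step(3500)
print(emb, lora)
```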
**The best step we recommend is 3500**, with a score of 0.989. The trigger words are:
1. `takagaki_kaede_idolmastercinderellagirls`
2. `mole, mole_under_eye, short_hair, blue_eyes, green_eyes, heterochromia, brown_hair, smile, bangs, blush, breasts, collarbone, green_hair, medium_breasts`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7500 | 0.987 | [Download](7500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7500/previews/pattern_2.png) | [<NSFW, click to see>](7500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](7500/previews/bikini.png) | [<NSFW, click to see>](7500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7500/previews/nude.png) | [<NSFW, click to see>](7500/previews/nude2.png) |  |  |
| 7000 | 0.986 | [Download](7000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](7000/previews/pattern_2.png) | [<NSFW, click to see>](7000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bikini.png) | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6500 | 0.989 | [Download](6500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6500/previews/pattern_2.png) | [<NSFW, click to see>](6500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](6500/previews/bikini.png) | [<NSFW, click to see>](6500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6500/previews/nude.png) | [<NSFW, click to see>](6500/previews/nude2.png) |  |  |
| 6000 | 0.983 | [Download](6000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](6000/previews/pattern_2.png) | [<NSFW, click to see>](6000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](6000/previews/bikini.png) | [<NSFW, click to see>](6000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6000/previews/nude.png) | [<NSFW, click to see>](6000/previews/nude2.png) |  |  |
| 5500 | 0.972 | [Download](5500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5500/previews/pattern_2.png) | [<NSFW, click to see>](5500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](5500/previews/bikini.png) | [<NSFW, click to see>](5500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5500/previews/nude.png) | [<NSFW, click to see>](5500/previews/nude2.png) |  |  |
| 5000 | 0.985 | [Download](5000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](5000/previews/pattern_2.png) | [<NSFW, click to see>](5000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](5000/previews/bikini.png) | [<NSFW, click to see>](5000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5000/previews/nude.png) | [<NSFW, click to see>](5000/previews/nude2.png) |  |  |
| 4500 | 0.985 | [Download](4500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4500/previews/pattern_2.png) | [<NSFW, click to see>](4500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](4500/previews/bikini.png) | [<NSFW, click to see>](4500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4500/previews/nude.png) | [<NSFW, click to see>](4500/previews/nude2.png) |  |  |
| 4000 | 0.973 | [Download](4000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](4000/previews/pattern_2.png) | [<NSFW, click to see>](4000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](4000/previews/bikini.png) | [<NSFW, click to see>](4000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4000/previews/nude.png) | [<NSFW, click to see>](4000/previews/nude2.png) |  |  |
| **3500** | **0.989** | [**Download**](3500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3500/previews/pattern_2.png) | [<NSFW, click to see>](3500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bikini.png) | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 3000 | 0.983 | [Download](3000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](3000/previews/pattern_2.png) | [<NSFW, click to see>](3000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](3000/previews/bikini.png) | [<NSFW, click to see>](3000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3000/previews/nude.png) | [<NSFW, click to see>](3000/previews/nude2.png) |  |  |
| 2500 | 0.977 | [Download](2500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2500/previews/pattern_2.png) | [<NSFW, click to see>](2500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](2500/previews/bikini.png) | [<NSFW, click to see>](2500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2500/previews/nude.png) | [<NSFW, click to see>](2500/previews/nude2.png) |  |  |
| 2000 | 0.964 | [Download](2000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](2000/previews/pattern_2.png) | [<NSFW, click to see>](2000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](2000/previews/bikini.png) | [<NSFW, click to see>](2000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2000/previews/nude.png) | [<NSFW, click to see>](2000/previews/nude2.png) |  |  |
| 1500 | 0.968 | [Download](1500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1500/previews/pattern_2.png) | [<NSFW, click to see>](1500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](1500/previews/bikini.png) | [<NSFW, click to see>](1500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [<NSFW, click to see>](1500/previews/nude2.png) |  |  |
| 1000 | 0.964 | [Download](1000/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](1000/previews/pattern_2.png) | [<NSFW, click to see>](1000/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](1000/previews/bikini.png) | [<NSFW, click to see>](1000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [<NSFW, click to see>](1000/previews/nude2.png) |  |  |
| 500 | 0.936 | [Download](500/takagaki_kaede_idolmastercinderellagirls.zip) |  | [<NSFW, click to see>](500/previews/pattern_2.png) | [<NSFW, click to see>](500/previews/pattern_3.png) |  |  |  |  |  | [<NSFW, click to see>](500/previews/bikini.png) | [<NSFW, click to see>](500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [<NSFW, click to see>](500/previews/nude2.png) |  |  |
|
facebook/opt-125m | facebook | 2023-09-15T13:10:03Z | 6,640,199 | 179 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-05-11T08:25:17Z | ---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
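The causal language modeling objective amounts to next-token prediction: at every position the label is simply the following token, and the loss is the average negative log-likelihood the model assigns to those labels. A toy numeric sketch of that objective (illustrative only; `clm_loss` and its inputs are hypothetical, not OPT's training code):

```python
import math

def clm_loss(token_ids, probs):
    """Average negative log-likelihood of each next token.

    `probs[t]` is a dict mapping token id -> model probability at step t
    (i.e. the distribution used to predict token_ids[t + 1]). This is a
    toy sketch of the CLM objective, not OPT's actual training code.
    """
    steps = len(token_ids) - 1  # labels are the inputs shifted left by one
    nll = 0.0
    for t in range(steps):
        next_token = token_ids[t + 1]
        nll -= math.log(probs[t][next_token])
    return nll / steps

# A model that puts probability 0.5 on each correct next token
# has a loss of -ln(0.5) ≈ 0.6931 per step.
tokens = [10, 11, 12]
probs = [{11: 0.5, 99: 0.5}, {12: 0.5, 99: 0.5}]
print(round(clm_loss(tokens, probs), 4))  # 0.6931
```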
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-125m")
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nA nice dinner with a friend.\nI'm not sure"}]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True)
>>> generator("What are we having for dinner?")
[{'generated_text': 'What are we having for dinner?\nCoffee, sausage and cream cheese at Chili's.'}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
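The 2048-token input sequences are typically produced by concatenating all tokenized documents into one long stream and cutting it into fixed-length blocks, as in the standard CLM preprocessing recipe. A minimal sketch under that assumption (`group_into_blocks` is a hypothetical helper, not OPT's actual data pipeline):

```python
from itertools import chain

BLOCK_SIZE = 2048  # OPT's training context length

def group_into_blocks(tokenized_docs, block_size=BLOCK_SIZE):
    """Concatenate tokenized documents and cut fixed-length blocks.

    Sketch of the standard CLM preprocessing step: documents are joined
    into a single token stream, then split into `block_size` chunks; a
    short remainder at the end is dropped. Hypothetical helper, not
    OPT's actual pipeline.
    """
    stream = list(chain.from_iterable(tokenized_docs))
    usable = (len(stream) // block_size) * block_size
    return [stream[i:i + block_size] for i in range(0, usable, block_size)]

# Toy example with a tiny block size so the behaviour is visible.
docs = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(group_into_blocks(docs, block_size=4))
# [[1, 2, 3, 4], [5, 6, 7, 8]]  (token 9 is dropped)
```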
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
facebook/opt-350m | facebook | 2023-09-15T13:09:50Z | 366,875 | 132 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-05-11T08:25:39Z | ---
language: en
inference: false
tags:
- text-generation
license: other
commercial: false
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-350m")
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nI'm having a steak and a salad.\nI'm"}]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True)
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\n\nWith spring fast approaching, it’s only appropriate"}]
```
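Conceptually, `do_sample=True` combined with top-k filtering replaces the greedy argmax with sampling from the k highest-probability next tokens at each step. A simplified single-step sketch of that filtering (illustrative only; `top_k_sample` is a hypothetical helper, the real logic lives in `transformers`' generation utilities):

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Sample one token id from the top-k logits.

    Sketch of what top-k filtered sampling does for a single decoding
    step: mask all but the k highest logits, renormalize, and sample.
    Not the transformers implementation.
    """
    logits = np.asarray(logits, dtype=np.float64)
    top_ids = np.argsort(logits)[-k:]          # indices of the k best tokens
    masked = np.full_like(logits, -np.inf)     # everything else gets -inf
    masked[top_ids] = logits[top_ids]
    probs = np.exp(masked - masked.max())      # softmax over survivors
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng(32)
logits = [2.0, 1.0, 0.5, -1.0, -3.0]
token = top_k_sample(logits, k=2, rng=rng)
print(token)  # always 0 or 1: only the two highest logits survive
```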
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': "The woman worked as a substitute teacher for kids who have missed school. She's the teacher herself,"},
 {'generated_text': 'The woman worked as a security guard for another company and does an average of around $13/hour'},
 {'generated_text': 'The woman worked as a receptionist, she could at the least wait a week or two for her'},
 {'generated_text': 'The woman worked as a manager/intern/career development coach/advisor at a nursing home'},
 {'generated_text': 'The woman worked as a maid and has to clean the house but you can tell her to do it'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man worked as a security guard for the National Football League franchise. He has been a part of'},
 {'generated_text': 'The man worked as a security guard for another company and does an excellent job.\nI remember when'},
 {'generated_text': 'The man worked as a "secret agent" but at the same time he\'s working to protect the'},
 {'generated_text': 'The man worked as a manager/operator/servant for a grocery store and does a lot of'},
 {'generated_text': 'The man worked as a bouncer near the scene of the accident - how he could do that is'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
facebook/opt-1.3b | facebook | 2023-09-15T13:09:33Z | 148,998 | 162 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-05-11T08:26:00Z | ---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b")
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nI'm not sure. I'm not a chef. I"}]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True)
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nI have had chicken and rice for lunch. It is delicious"}]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': 'The woman worked as a bartender for six months before getting to the job she always dreamed of. She'},
{'generated_text': 'The woman worked as a nanny in a house near The White Horse Farm in the Yorkshire Dales'},
{'generated_text': "The woman worked as a translator at the British Broadcasting Corporation's headquarters and was also an acquaintance of some"},
{'generated_text': 'The woman worked as a secretary and went to school full-time, and also worked as a waitress'},
{'generated_text': 'The woman worked as a beautician with her baby and the little girl is now at the age where'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': 'The man worked as a janitor and the owner of the house he worked at caught him cheating on'},
{'generated_text': 'The man worked as a software engineer.\n\nFor over 10 years, he had been at Amazon'},
{'generated_text': 'The man worked as a car salesman - and was a man of his word to her\nA T'},
{'generated_text': 'The man worked as a private contractor for five years. He went to the Bahamas in the summer of'},
{'generated_text': 'The man worked as a computer systems consultant. After leaving the job, he became a prolific internet hacker'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
CyberHarem/koganeikoito_edomaeelf | CyberHarem | 2023-09-15T13:09:21Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/koganeikoito_edomaeelf",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T12:50:44Z | ---
license: mit
datasets:
- CyberHarem/koganeikoito_edomaeelf
pipeline_tag: text-to-image
tags:
- art
---
# Lora of koganeikoito_edomaeelf
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8320, you need to download `8320/koganeikoito_edomaeelf.pt` as the embedding and `8320/koganeikoito_edomaeelf.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 8320**, with the score of 0.866. The trigger words are:
1. `koganeikoito_edomaeelf`
2. `short_hair, black_hair, blush, blue_eyes, ribbon, neck_ribbon, blue_hair`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9600 | 0.793 | [Download](9600/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9600/previews/bikini.png) | [<NSFW, click to see>](9600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9600/previews/nude.png) | [<NSFW, click to see>](9600/previews/nude2.png) |  |  |
| 8960 | 0.826 | [Download](8960/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8960/previews/bikini.png) | [<NSFW, click to see>](8960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8960/previews/nude.png) | [<NSFW, click to see>](8960/previews/nude2.png) |  |  |
| **8320** | **0.866** | [**Download**](8320/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8320/previews/bikini.png) | [<NSFW, click to see>](8320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8320/previews/nude.png) | [<NSFW, click to see>](8320/previews/nude2.png) |  |  |
| 7680 | 0.816 | [Download](7680/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7680/previews/bikini.png) | [<NSFW, click to see>](7680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7680/previews/nude.png) | [<NSFW, click to see>](7680/previews/nude2.png) |  |  |
| 7040 | 0.819 | [Download](7040/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7040/previews/bikini.png) | [<NSFW, click to see>](7040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7040/previews/nude.png) | [<NSFW, click to see>](7040/previews/nude2.png) |  |  |
| 6400 | 0.826 | [Download](6400/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6400/previews/bikini.png) | [<NSFW, click to see>](6400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6400/previews/nude.png) | [<NSFW, click to see>](6400/previews/nude2.png) |  |  |
| 5760 | 0.794 | [Download](5760/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5760/previews/bikini.png) | [<NSFW, click to see>](5760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5760/previews/nude.png) | [<NSFW, click to see>](5760/previews/nude2.png) |  |  |
| 5120 | 0.831 | [Download](5120/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5120/previews/bikini.png) | [<NSFW, click to see>](5120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5120/previews/nude.png) | [<NSFW, click to see>](5120/previews/nude2.png) |  |  |
| 4480 | 0.826 | [Download](4480/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4480/previews/bikini.png) | [<NSFW, click to see>](4480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4480/previews/nude.png) | [<NSFW, click to see>](4480/previews/nude2.png) |  |  |
| 3840 | 0.813 | [Download](3840/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3840/previews/bikini.png) | [<NSFW, click to see>](3840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3840/previews/nude.png) | [<NSFW, click to see>](3840/previews/nude2.png) |  |  |
| 3200 | 0.820 | [Download](3200/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3200/previews/bikini.png) | [<NSFW, click to see>](3200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3200/previews/nude.png) | [<NSFW, click to see>](3200/previews/nude2.png) |  |  |
| 2560 | 0.764 | [Download](2560/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2560/previews/bikini.png) | [<NSFW, click to see>](2560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2560/previews/nude.png) | [<NSFW, click to see>](2560/previews/nude2.png) |  |  |
| 1920 | 0.711 | [Download](1920/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1920/previews/bikini.png) | [<NSFW, click to see>](1920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1920/previews/nude.png) | [<NSFW, click to see>](1920/previews/nude2.png) |  |  |
| 1280 | 0.729 | [Download](1280/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1280/previews/bikini.png) | [<NSFW, click to see>](1280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1280/previews/nude.png) | [<NSFW, click to see>](1280/previews/nude2.png) |  |  |
| 640 | 0.732 | [Download](640/koganeikoito_edomaeelf.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](640/previews/bikini.png) | [<NSFW, click to see>](640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](640/previews/nude.png) | [<NSFW, click to see>](640/previews/nude2.png) |  |  |
|
hoangle/abc | hoangle | 2023-09-15T13:08:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-09-15T13:08:43Z | ---
license: creativeml-openrail-m
---
|
ronghua/opus-mt-en-zh | ronghua | 2023-09-15T13:05:39Z | 4 | 0 | transformers | [
"transformers",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| translation | 2023-09-15T13:03:04Z | ---
language:
- en
- zh
tags:
- translation
license: apache-2.0
---
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
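Since the checkpoint expects a sentence-initial `>>id<<` token, a minimal usage sketch may help. The helper below and the `Helsinki-NLP/opus-mt-en-zh` checkpoint name (the upstream model this card mirrors) are illustrative assumptions; `translate` downloads the weights on first call and requires `transformers` and `sentencepiece`.

```python
def add_language_token(text: str, lang_id: str = "cmn_Hans") -> str:
    """Prepend the sentence-initial >>id<< target-language token."""
    return f">>{lang_id}<< {text}"

def translate(texts, lang_id: str = "cmn_Hans"):
    """Translate English sentences to Chinese (downloads weights on first use)."""
    from transformers import MarianMTModel, MarianTokenizer
    name = "Helsinki-NLP/opus-mt-en-zh"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer([add_language_token(t, lang_id) for t in texts],
                      return_tensors="pt", padding=True)
    return tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)

print(add_language_token("How are you?"))  # >>cmn_Hans<< How are you?
```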
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.8959999999999999
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
facebook/opt-2.7b | facebook | 2023-09-15T13:04:38Z | 59,954 | 82 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"opt",
"text-generation",
"en",
"arxiv:2205.01068",
"arxiv:2005.14165",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
]
| text-generation | 2022-05-11T08:26:30Z | ---
language: en
inference: false
tags:
- text-generation
- opt
license: other
commercial: false
---
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model="facebook/opt-2.7b")
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nI'm thinking pizza.\nI'm thinking tacos.\n"}]
```
By default, generation is deterministic. To use top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True)
>>> generator("What are we having for dinner?")
[{'generated_text': "What are we having for dinner?\nJust pizza?\nWell, I suppose that would suffice."}]
```
### Limitations and bias
As mentioned in Meta AI's model card, because the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True, num_return_sequences=5)
>>> generator("The woman worked as a")
[{'generated_text': "The woman worked as a security guard at a nursery in the city's eastern district of Samut P"},
{'generated_text': 'The woman worked as a doctor in the Philippines. Officials in China allege she stole the coronavirus'},
{'generated_text': 'The woman worked as a teacher in the city of Krasnodar in south Russia. She'},
{'generated_text': 'The woman worked as a researcher and lecturer at the Russian Academy of Sciences in a laboratory dedicated to the'},
{'generated_text': 'The woman worked as a nanny on a property owned by Mr Fitton-Allen in the city'}]
```
compared to:
```python
>>> from transformers import pipeline, set_seed
>>> set_seed(32)
>>> generator = pipeline('text-generation', model="facebook/opt-2.7b", do_sample=True, num_return_sequences=5)
>>> generator("The man worked as a")
[{'generated_text': "The man worked as a security guard at a retirement home after being hired by the administrator's cousin,"},
{'generated_text': 'The man worked as a doctor in the Philippines.\n\nHe had hoped to work his way back'},
{'generated_text': 'The man worked as a teacher in the city of Krasnodar in south Russia.He'},
{'generated_text': 'The man worked as a researcher and his work on the topic predates the project, by many years'},
{'generated_text': 'The man worked as a chef in a restaurant for 40 years. How could this be so different from'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included,
- The Pushshift.io Reddit dataset, which was developed in Baumgartner et al. (2020) and processed in
  Roller et al. (2021),
- CCNewsV2, containing an updated version of the English portion of the CommonCrawl News
  dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content, as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
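A toy sketch of how a flat token stream is typically packed into fixed-length blocks for causal-LM training (the helper name is illustrative, not the actual metaseq code):

```python
def pack_sequences(token_stream, block_size=2048):
    """Split a flat token stream into fixed-length training blocks,
    dropping the trailing remainder (standard CLM preprocessing)."""
    n_blocks = len(token_stream) // block_size
    return [token_stream[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

# Tiny block size just to show the shape:
print(pack_sequences(list(range(10)), block_size=4))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```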
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Akaherz/lora-trained-xl-colab | Akaherz | 2023-09-15T12:51:33Z | 7 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2023-09-15T11:35:45Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Akaherz/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
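A minimal loading sketch with `diffusers` (illustrative only: `load_pipeline` downloads several GB of SDXL weights when called, and assumes a recent `diffusers` release with LoRA support):

```python
def load_pipeline():
    """Load the SDXL base pipeline and attach these LoRA weights.
    Not called here: downloads several GB on first use."""
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    )
    pipe.load_lora_weights("Akaherz/lora-trained-xl-colab")
    return pipe

# The instance prompt these weights were trained on:
prompt = "a photo of sks dog"
```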
|
alperenunlu/PPO-LunarLander-v2 | alperenunlu | 2023-09-15T12:48:05Z | 2 | 4 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-08-07T14:41:48Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.82 +/- 15.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLander-v2 -orga alperenunlu -f logs/
python -m rl_zoo3.enjoy --algo ppo --env LunarLander-v2 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env LunarLander-v2 -orga alperenunlu -f logs/
python -m rl_zoo3.enjoy --algo ppo --env LunarLander-v2 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env LunarLander-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env LunarLander-v2 -f logs/ -orga alperenunlu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 8),
('clip_range', 0.2),
('ent_coef', 0.0012069732975503813),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 0.0004080379698108855),
('max_grad_norm', 0.5),
('n_envs', 16),
('n_epochs', 10),
('n_steps', 256),
('n_timesteps', 2000000.0),
('policy', 'MlpPolicy'),
('vf_coef', 0.3326356386659747),
('normalize', False)])
```
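A back-of-the-envelope reading of these settings (simple arithmetic, not taken from the training logs):

```python
# What the PPO hyperparameters above imply per update and overall.
n_envs, n_steps, batch_size, n_epochs = 16, 256, 8, 10
n_timesteps = 2_000_000

rollout_buffer = n_envs * n_steps           # transitions collected per update
minibatches = rollout_buffer // batch_size  # minibatches per epoch
grad_steps = minibatches * n_epochs         # gradient steps per update
updates = n_timesteps // rollout_buffer     # approximate number of PPO updates

print(rollout_buffer, minibatches, grad_steps, updates)  # 4096 512 5120 488
```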
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ldos/text_shortening_model_v41 | ldos | 2023-09-15T12:47:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-14T11:45:14Z | ---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: text_shortening_model_v41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_shortening_model_v41
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7205
- Rouge1: 0.4471
- Rouge2: 0.2088
- Rougel: 0.3939
- Rougelsum: 0.3941
- Bert precision: 0.8647
- Bert recall: 0.8624
- Average word count: 8.6517
- Max word count: 18
- Min word count: 4
- Average token count: 16.5045
- % shortened texts with length > 12: 5.7057
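The length-based metrics above can be reproduced from a list of generated shortenings with a small helper (a sketch; whitespace word-splitting is an assumption, since the exact counting rule is not documented):

```python
def length_stats(texts, threshold=12):
    """Word-count summary matching the card's custom length metrics."""
    counts = [len(t.split()) for t in texts]
    return {
        "average_word_count": sum(counts) / len(counts),
        "max_word_count": max(counts),
        "min_word_count": min(counts),
        "pct_over_threshold": 100.0 * sum(c > threshold for c in counts) / len(counts),
    }

stats = length_stats([
    "a short headline here",  # 4 words
    "one two three four five six seven eight nine ten eleven twelve thirteen",  # 13 words
])
print(stats)  # {'average_word_count': 8.5, 'max_word_count': 13, 'min_word_count': 4, 'pct_over_threshold': 50.0}
```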
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bert precision | Bert recall | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------------:|:-----------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:|
| 2.424 | 1.0 | 73 | 2.2763 | 0.4466 | 0.2286 | 0.3969 | 0.3973 | 0.8628 | 0.8607 | 8.4805 | 17 | 5 | 14.6967 | 3.6036 |
| 1.331 | 2.0 | 146 | 2.1237 | 0.4671 | 0.2385 | 0.4119 | 0.4124 | 0.86 | 0.8702 | 9.7117 | 20 | 4 | 16.7838 | 14.4144 |
| 0.9725 | 3.0 | 219 | 1.9947 | 0.448 | 0.2384 | 0.4004 | 0.4025 | 0.8603 | 0.8627 | 8.8649 | 16 | 5 | 15.8709 | 5.7057 |
| 0.7753 | 4.0 | 292 | 2.2302 | 0.4435 | 0.2201 | 0.3983 | 0.3991 | 0.8653 | 0.8588 | 8.1141 | 16 | 5 | 15.5526 | 1.8018 |
| 0.6017 | 5.0 | 365 | 2.1392 | 0.4293 | 0.2142 | 0.383 | 0.3836 | 0.8593 | 0.8604 | 8.6156 | 17 | 4 | 14.1982 | 3.3033 |
| 0.4911 | 6.0 | 438 | 2.4747 | 0.4166 | 0.1882 | 0.365 | 0.3668 | 0.8582 | 0.8556 | 8.4234 | 14 | 5 | 14.4024 | 3.6036 |
| 0.6947 | 7.0 | 511 | 2.6372 | 0.3894 | 0.1904 | 0.3527 | 0.3534 | 0.8471 | 0.8477 | 8.5165 | 14 | 4 | 16.6607 | 4.2042 |
| 0.5839 | 8.0 | 584 | 2.6038 | 0.3641 | 0.1627 | 0.3272 | 0.3276 | 0.8464 | 0.8402 | 7.7508 | 13 | 4 | 15.2342 | 0.6006 |
| 0.4668 | 9.0 | 657 | 2.7711 | 0.4015 | 0.1904 | 0.3627 | 0.3626 | 0.8537 | 0.8517 | 8.8889 | 17 | 4 | 16.2402 | 3.9039 |
| 0.4539 | 10.0 | 730 | 2.8819 | 0.4 | 0.1903 | 0.353 | 0.3538 | 0.8526 | 0.8519 | 8.6156 | 15 | 5 | 16.1652 | 3.9039 |
| 0.4018 | 11.0 | 803 | 2.8273 | 0.3799 | 0.1764 | 0.3404 | 0.3407 | 0.8432 | 0.8454 | 8.7177 | 17 | 4 | 17.0661 | 3.6036 |
| 0.2764 | 12.0 | 876 | 2.9767 | 0.3888 | 0.1825 | 0.3504 | 0.3509 | 0.8526 | 0.8475 | 8.4354 | 13 | 5 | 16.015 | 2.1021 |
| 0.2338 | 13.0 | 949 | 2.8883 | 0.4184 | 0.202 | 0.3714 | 0.3714 | 0.852 | 0.8585 | 9.3754 | 17 | 5 | 15.8709 | 8.4084 |
| 0.1878 | 14.0 | 1022 | 3.1069 | 0.4302 | 0.1966 | 0.3782 | 0.3791 | 0.8616 | 0.8573 | 8.4324 | 15 | 4 | 16.2492 | 3.3033 |
| 0.1608 | 15.0 | 1095 | 2.8510 | 0.4461 | 0.2151 | 0.392 | 0.3925 | 0.8627 | 0.8625 | 8.7598 | 19 | 4 | 16.1471 | 5.7057 |
| 0.1416 | 16.0 | 1168 | 3.0792 | 0.4246 | 0.1983 | 0.3735 | 0.3735 | 0.8591 | 0.8568 | 8.6637 | 16 | 5 | 16.3303 | 7.5075 |
| 0.1507 | 17.0 | 1241 | 3.2058 | 0.4336 | 0.2016 | 0.379 | 0.3796 | 0.8593 | 0.8589 | 8.9129 | 17 | 5 | 16.6697 | 5.1051 |
| 0.108 | 18.0 | 1314 | 3.0551 | 0.4485 | 0.2248 | 0.4002 | 0.4006 | 0.8645 | 0.8608 | 8.2492 | 14 | 5 | 15.967 | 3.6036 |
| 0.0756 | 19.0 | 1387 | 3.1943 | 0.4439 | 0.2167 | 0.3919 | 0.3925 | 0.8652 | 0.8608 | 8.4865 | 15 | 5 | 15.8919 | 3.9039 |
| 0.104 | 20.0 | 1460 | 3.1156 | 0.4411 | 0.2035 | 0.3894 | 0.3903 | 0.8644 | 0.8612 | 8.5135 | 16 | 5 | 16.4294 | 6.006 |
| 0.0716 | 21.0 | 1533 | 3.4040 | 0.4389 | 0.201 | 0.3824 | 0.3838 | 0.8632 | 0.8614 | 8.7508 | 16 | 4 | 16.5075 | 6.006 |
| 0.0576 | 22.0 | 1606 | 3.4264 | 0.4476 | 0.2104 | 0.3902 | 0.391 | 0.8657 | 0.8629 | 8.5405 | 16 | 4 | 16.4144 | 6.6066 |
| 0.041 | 23.0 | 1679 | 3.5711 | 0.447 | 0.2108 | 0.3931 | 0.393 | 0.8639 | 0.8619 | 8.5976 | 18 | 4 | 16.4264 | 7.2072 |
| 0.0355 | 24.0 | 1752 | 3.6294 | 0.4509 | 0.215 | 0.3981 | 0.3989 | 0.8652 | 0.8632 | 8.6186 | 18 | 4 | 16.4985 | 6.006 |
| 0.0313 | 25.0 | 1825 | 3.7205 | 0.4471 | 0.2088 | 0.3939 | 0.3941 | 0.8647 | 0.8624 | 8.6517 | 18 | 4 | 16.5045 | 5.7057 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
leonhardhennig/copious_ner | leonhardhennig | 2023-09-15T12:46:27Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"TransformerTokenClassificationModel",
"en",
"dataset:copious",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2023-09-15T07:26:32Z | ---
language:
- en
license: "mit"
datasets:
- copious
metrics:
- f1
---
# Model Card for copious_ner
<!-- Provide a quick summary of what the model is/does. [Optional] -->
NER on Copious Biodiversity dataset
# Table of Contents
- [Model Card for copious_ner](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
NER on Copious Biodiversity dataset
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** mit
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
More information needed
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details> |
penscola/sentence_sentiments_analysis_bert | penscola | 2023-09-15T12:45:46Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-14T10:33:36Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: sentence_sentiments_analysis_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentence_sentiments_analysis_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3359
- F1-score: 0.9076
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3379 | 1.0 | 2500 | 0.3514 | 0.9024 |
| 0.236 | 2.0 | 5000 | 0.3359 | 0.9076 |
| 0.1406 | 3.0 | 7500 | 0.4492 | 0.9097 |
| 0.0519 | 4.0 | 10000 | 0.5020 | 0.9172 |
| 0.0323 | 5.0 | 12500 | 0.5299 | 0.9198 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
prince99/working | prince99 | 2023-09-15T12:36:30Z | 0 | 0 | null | [
"generated_from_trainer",
"text-generation",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"region:us"
]
| text-generation | 2023-09-14T10:55:59Z | ---
base_model: meta-llama/Llama-2-13b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: working
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3 |
fusersam/Sentiment-Analysis-Model | fusersam | 2023-09-15T12:24:28Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-11T01:26:02Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Sentiment-Analysis-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sentiment-Analysis-Model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7000
- Accuracy: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
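Pending the author's details, a hedged usage sketch with the `transformers` pipeline; the label names this checkpoint emits were not verified, so inspect the output rather than assuming a fixed `id2label` mapping:

```python
def top_label(scores):
    """Pick the highest-scoring label from one pipeline result row."""
    return max(scores, key=lambda s: s["score"])["label"]

def classify(texts, model_id="fusersam/Sentiment-Analysis-Model"):
    # Lazy import: loading the pipeline downloads the checkpoint.
    from transformers import pipeline
    clf = pipeline("text-classification", model=model_id, top_k=None)
    return [top_label(row) for row in clf(texts)]

# classify(["I really enjoyed this film."])  # label mapping unverified
```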
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8648 | 0.5 | 500 | 0.9848 | 0.703 |
| 0.8367 | 1.0 | 1000 | 0.8764 | 0.683 |
| 0.7815 | 1.5 | 1500 | 0.7792 | 0.7145 |
| 0.7751 | 2.0 | 2000 | 0.7516 | 0.7095 |
| 0.8081 | 2.5 | 2500 | 0.7783 | 0.7055 |
| 0.8142 | 3.0 | 3000 | 0.8125 | 0.688 |
| 0.8497 | 3.5 | 3500 | 0.8383 | 0.6575 |
| 0.8006 | 4.0 | 4000 | 0.7412 | 0.705 |
| 0.7363 | 4.5 | 4500 | 0.7299 | 0.718 |
| 0.7151 | 5.0 | 5000 | 0.7000 | 0.7165 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
judy93536/RoBERTa-perigon-news | judy93536 | 2023-09-15T12:22:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-10T20:42:00Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-perigon-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-perigon-news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9548
## Model description
The model was pre-trained for an MLM task using over 200K financial news articles obtained from Perigon https://www.goperigon.com/.
## Intended uses & limitations
More information needed
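Since this is a masked-LM checkpoint, a hedged fill-mask sketch (the example sentence is an assumption, not drawn from the Perigon data):

```python
def fill(text, model_id="judy93536/RoBERTa-perigon-news", k=3):
    # Lazy import: loading the pipeline downloads the checkpoint.
    from transformers import pipeline
    unmasker = pipeline("fill-mask", model=model_id, top_k=k)
    return [r["token_str"] for r in unmasker(text)]

# RoBERTa tokenizers use "<mask>" as the mask token:
# fill("The central bank raised interest <mask> on Tuesday.")
```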
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.7e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.19
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4872 | 1.0 | 5480 | 1.3355 |
| 1.3571 | 2.0 | 10960 | 1.2488 |
| 1.3078 | 3.0 | 16440 | 1.2144 |
| 1.2425 | 4.0 | 21920 | 1.1634 |
| 1.2035 | 5.0 | 27400 | 1.1309 |
| 1.157 | 6.0 | 32880 | 1.0941 |
| 1.1268 | 7.0 | 38360 | 1.0696 |
| 1.098 | 8.0 | 43840 | 1.0466 |
| 1.0681 | 9.0 | 49320 | 1.0297 |
| 1.0356 | 10.0 | 54800 | 1.0168 |
| 1.0194 | 11.0 | 60280 | 1.0011 |
| 0.9941 | 12.0 | 65760 | 0.9843 |
| 0.981 | 13.0 | 71240 | 0.9716 |
| 0.9634 | 14.0 | 76720 | 0.9600 |
| 0.9511 | 15.0 | 82200 | 0.9546 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Sohaib/open_llama_3b_v2-qlora | Sohaib | 2023-09-15T12:22:35Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T11:45:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
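The list above maps one-to-one onto `transformers.BitsAndBytesConfig` kwargs. A hedged reconstruction follows; the base-model id is an assumption inferred from the repo name:

```python
# Mirror of the 4-bit quantization settings listed in this card.
quant_kwargs = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "bfloat16",
}

def load_quantized(base_model="openlm-research/open_llama_3b_v2"):  # assumed base model
    # Lazy imports: require transformers, bitsandbytes, and a CUDA device.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    cfg = BitsAndBytesConfig(
        load_in_4bit=quant_kwargs["load_in_4bit"],
        bnb_4bit_quant_type=quant_kwargs["bnb_4bit_quant_type"],
        bnb_4bit_use_double_quant=quant_kwargs["bnb_4bit_use_double_quant"],
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    return AutoModelForCausalLM.from_pretrained(base_model, quantization_config=cfg)
```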
### Framework versions
- PEFT 0.6.0.dev0
|
pszemraj/GPT-Neo-33M-simplewiki-2048-scratch | pszemraj | 2023-09-15T12:15:09Z | 121 | 1 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"en",
"dataset:pszemraj/simple_wikipedia_LM",
"base_model:roneneldan/TinyStories-33M",
"base_model:finetune:roneneldan/TinyStories-33M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T04:03:25Z | ---
base_model: roneneldan/TinyStories-33M
tags:
- generated_from_trainer
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
do_sample: true
repetition_penalty: 1.1
no_repeat_ngram_size: 5
guidance_scale: 1.01
eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
example_title: El Microondas
- text: A meme is
example_title: meme
- text: >-
Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
He chose her because she had
example_title: Coreference resolution
- text: >-
On a shelf, there are five books: a gray book, a red book, a purple book, a
blue book, and a black book
example_title: Logic puzzles
- text: >-
The two men running to become New York City's next mayor will face off in
their first debate Wednesday night
example_title: Reading comprehension
pipeline_tag: text-generation
datasets:
- pszemraj/simple_wikipedia_LM
license: apache-2.0
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT-Neo-33M-simplewiki-2048-scratch
Initialized from random weights using the config of [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M), then trained for 3 epochs in bf16.
It achieves the following results on the evaluation set:
- Loss: 3.9511
- Accuracy: 0.3843
## Model description
More information needed
## Intended uses & limitations
More information needed
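A hedged generation sketch that reuses the core sampling settings from the `inference` block in this card's metadata:

```python
GEN_KWARGS = {  # values taken from the widget config in this card's metadata
    "max_new_tokens": 64,
    "do_sample": True,
    "repetition_penalty": 1.1,
    "no_repeat_ngram_size": 5,
}

def generate(prompt, model_id="pszemraj/GPT-Neo-33M-simplewiki-2048-scratch"):
    # Lazy import: loading the pipeline downloads the checkpoint.
    from transformers import pipeline
    gen = pipeline("text-generation", model=model_id)
    return gen(prompt, **GEN_KWARGS)[0]["generated_text"]

# generate("A meme is")
```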
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.4676 | 0.45 | 100 | 5.0139 | 0.2811 |
| 5.1729 | 0.89 | 200 | 4.6737 | 0.3050 |
| 4.8702 | 1.34 | 300 | 4.4922 | 0.3170 |
| 4.5538 | 1.79 | 400 | 4.3026 | 0.3348 |
| 4.4818 | 2.23 | 500 | 4.0908 | 0.3649 |
| 4.4583 | 2.68 | 600 | 3.9511 | 0.3843 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
Cartinoe5930/lima-2-7b-bnb | Cartinoe5930 | 2023-09-15T12:08:50Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T00:54:55Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
ronit33/distilbert-base-uncased-finetuned-emotion-dataset | ronit33 | 2023-09-15T11:54:10Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-06-26T06:42:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-dataset
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9183451843024099
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2300
- Accuracy: 0.918
- F1: 0.9183
## Model description
More information needed
## Intended uses & limitations
More information needed
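Until the author documents intended uses, a hedged lower-level inference sketch (no pipeline wrapper); the class names are read from the checkpoint's own `id2label`, so nothing here assumes a fixed label order:

```python
def argmax(xs):
    """Index of the largest score -- the decision rule used below."""
    return max(range(len(xs)), key=xs.__getitem__)

def predict(text, model_id="ronit33/distilbert-base-uncased-finetuned-emotion-dataset"):
    # Lazy imports: require torch and transformers; downloads the checkpoint.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    with torch.no_grad():
        logits = model(**tok(text, return_tensors="pt")).logits
    return model.config.id2label[argmax(logits[0].tolist())]
```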
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8386 | 1.0 | 250 | 0.3276 | 0.904 | 0.9011 |
| 0.2572 | 2.0 | 500 | 0.2300 | 0.918 | 0.9183 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
stefaniftime/dialoGPT-finetuned | stefaniftime | 2023-09-15T11:42:38Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:daily_dialog",
"base_model:stefaniftime/tmp93avx00w",
"base_model:finetune:stefaniftime/tmp93avx00w",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-12T10:48:47Z | ---
license: mit
base_model: stefaniftime/tmp93avx00w
tags:
- generated_from_trainer
datasets:
- daily_dialog
model-index:
- name: dialoGPT-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dialoGPT-finetuned
This model is a fine-tuned version of [stefaniftime/tmp93avx00w](https://huggingface.co/stefaniftime/tmp93avx00w) on the daily_dialog dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
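A hedged single-turn chat sketch in the usual DialoGPT style (a multi-turn loop would concatenate the growing history the same way):

```python
def reply(user_text, model_id="stefaniftime/dialoGPT-finetuned"):
    # Lazy import: loading the model downloads the checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    # DialoGPT convention: each turn ends with the EOS token.
    ids = tok.encode(user_text + tok.eos_token, return_tensors="pt")
    out = model.generate(ids, max_length=200, pad_token_id=tok.eos_token_id)
    # Return only the newly generated turn, not the echoed input.
    return tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
```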
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
eolang/SW-NER-v1 | eolang | 2023-09-15T11:39:15Z | 118 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"sw",
"dataset:masakhaner",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-04T19:10:12Z | ---
language:
- sw
license: apache-2.0
datasets:
- masakhaner
pipeline_tag: token-classification
examples: null
widget:
- text: Joe Bidden ni rais wa marekani.
example_title: Sentence 1
- text: Tumefanya mabadiliko muhimu katika sera zetu za faragha na vidakuzi.
example_title: Sentence 2
- text: Mtoto anaweza kupoteza muda kabisa.
example_title: Sentence 3
metrics:
- accuracy
---
# Swahili Named Entity Recognition
- **TUS-NER-sw** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance 😀**
- Finetuned from model: [eolang/SW-v1](https://huggingface.co/eolang/SW-v1)
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("eolang/SW-NER-v1")
model = AutoModelForTokenClassification.from_pretrained("eolang/SW-NER-v1")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Tumefanya mabadiliko muhimu katika sera zetu za faragha na vidakuzi"
ner_results = nlp(example)
print(ner_results)
```
## Training data
This model was fine-tuned on the Swahili Version of the [Masakhane Dataset](https://github.com/masakhane-io/masakhane-ner/tree/main/MasakhaNER2.0/data/swa) from the [MasakhaneNER Project](https://github.com/masakhane-io/masakhane-ner).
MasakhaNER is a collection of Named Entity Recognition (NER) datasets for 10 different African languages.
The languages forming this dataset are: Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian-Pidgin, Swahili, Wolof, and Yorùbá.
## Training procedure
This model was trained on a single NVIDIA RTX 3090 GPU with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805). |
Someman/bart-hindi | Someman | 2023-09-15T11:31:34Z | 138 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"hindi",
"summarization",
"seq2seq",
"dataset:Someman/hindi-summarization",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2023-06-01T01:17:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- hindi
- summarization
- seq2seq
datasets:
- Someman/hindi-summarization
base_model: facebook/bart-base
model-index:
- name: bart-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-hindi
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [Someman/hindi-summarization](https://huggingface.co/datasets/Someman/hindi-summarization) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
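A hedged summarization sketch for Hindi input (the length cap is an assumption, not a documented setting of this model):

```python
def summarize(article, model_id="Someman/bart-hindi", max_len=128):
    # Lazy import: loading the pipeline downloads the checkpoint.
    from transformers import pipeline
    summarizer = pipeline("summarization", model=model_id)
    return summarizer(article, max_length=max_len)[0]["summary_text"]

# summarize("<long Hindi news article here>")
```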
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
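The `total_train_batch_size` above is derived, not set directly: with gradient accumulation, the optimizer steps once per `gradient_accumulation_steps` micro-batches. A quick sketch of the arithmetic:

```python
def effective_batch_size(per_device_batch, grad_accum_steps, n_devices=1):
    """Total train batch size as the HF Trainer reports it."""
    return per_device_batch * grad_accum_steps * n_devices

assert effective_batch_size(1, 16) == 16  # the values in this card
```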
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6568 | 0.14 | 500 | 0.6501 |
| 0.682 | 0.29 | 1000 | 0.5757 |
| 0.5331 | 0.43 | 1500 | 0.5530 |
| 0.5612 | 0.58 | 2000 | 0.5311 |
| 0.5685 | 0.72 | 2500 | 0.5043 |
| 0.4993 | 0.87 | 3000 | 0.4985 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 |
awari/outputs | awari | 2023-09-15T11:28:18Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
]
| null | 2023-09-15T11:26:53Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ZhihaiLLM/wisdomInterrogatory | ZhihaiLLM | 2023-09-15T11:28:13Z | 17 | 4 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-02T10:16:48Z | ---
language:
- en
- zh
license: other
tasks:
- text-generation
---
# wisdomInterrogatory (智海-录问)
## Project background
wisdomInterrogatory (智海-录问) is a legal-domain large language model jointly designed and developed by Zhejiang University, Alibaba DAMO Academy, and Huayuan Computing (华院计算). Its core aim is "shared access to legal knowledge and improved judicial efficiency": providing support for bringing legal-intelligence systems into judicial practice, building digitized case resources, and empowering virtual legal consulting services, thereby forming a digital, intelligent foundation for the judiciary.
## Model training
Our base model is [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B); on top of it, we performed continued (second-stage) pre-training followed by instruction fine-tuning.
### Continued pre-training
Continued pre-training injects legal-domain knowledge into the general-purpose model. The pre-training data comprise legal documents, judicial cases, and legal Q&A data, 40 GB in total.
### Instruction fine-tuning
After continued pre-training, the instruction fine-tuning stage uses 100k instruction-tuning examples so that the model can answer questions and converse with users directly.
## Inference code
#### Environment setup
```shell
transformers>=4.27.1
accelerate>=0.20.1
torch>=2.0.1
modelscope>=1.8.3
sentencepiece==0.1.99
```
#### Running inference
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
from modelscope import AutoModelForCausalLM, AutoTokenizer, snapshot_download
import torch
model_id = "wisdomOcean/wisdomInterrogatory"
revision = 'v1.0.0'
model_dir = snapshot_download(model_id, revision)
def generate_response(prompt: str) -> str:
inputs = tokenizer(f'</s>Human:{prompt} </s>Assistant: ', return_tensors='pt')
inputs = inputs.to('cuda')
pred = model.generate(**inputs, max_new_tokens=800,
repetition_penalty=1.2)
response = tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)
return response.split("Assistant: ")[1]
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_dir, device_map="auto",
torch_dtype=torch.float16,
trust_remote_code=True)
prompt = "如果喝了两斤白酒后开车,会有什么后果?"  # "What would happen if you drove after drinking two jin (~1 kg) of baijiu?"
resp = generate_response(prompt)
print(resp)
```
## Disclaimer
This model is provided solely for academic research; no guarantee is made as to the accuracy, completeness, or suitability of its results. When using content generated by the model, you should judge its applicability yourself and bear the associated risks. |
s3nh/ajibawa-2023-Uncensored-Frank-33B-GGUF | s3nh | 2023-09-15T11:19:10Z | 0 | 1 | transformers | [
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T11:19:09Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/ajibawa-2023/Uncensored-Frank-33B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
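Until this section is filled in, here is a hedged `llama-cpp-python` sketch; the `.gguf` filename is an assumption (use whichever quantization file you downloaded from this repo):

```python
def generate(prompt, model_path="uncensored-frank-33b.Q4_K_M.gguf", max_tokens=128):
    # Lazy import: pip install llama-cpp-python; the model file must exist locally.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```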
# Original model card
|
Arrivedercis/llama-2-13b-minifinreport | Arrivedercis | 2023-09-15T11:10:10Z | 7 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"finance",
"dataset:JanosAudran/financial-reports-sec",
"dataset:Arrivedercis/finreport-llama2-5k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T10:47:31Z | ---
license: llama2
datasets:
- JanosAudran/financial-reports-sec
- Arrivedercis/finreport-llama2-5k
tags:
- finance
--- |
fnlp/SpeechGPT-7B-cm | fnlp | 2023-09-15T11:06:00Z | 123 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2305.11000",
"arxiv:2308.16692",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-14T13:43:16Z | # SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities
<a href='https://0nutation.github.io/SpeechGPT.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://arxiv.org/abs/2305.11000'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> [](https://huggingface.co/datasets/fnlp/SpeechInstruct)
<p align="center">
<img src="Pictures/logo.png" width="20%"> <br>
</p>
## Introduction
SpeechGPT is a large language model with **intrinsic cross-modal conversational abilities**, capable of perceiving and generating multi-modal content following human instructions. With discrete speech representations, we first construct **SpeechInstruct**, a large-scale cross-modal speech instruction dataset. Additionally, we employ a three-stage training strategy that includes **modality-adaptation pre-training**, **cross-modal instruction fine-tuning**, and **chain-of-modality instruction fine-tuning**. The experimental results demonstrate that SpeechGPT has an impressive capacity to follow multi-modal human instructions and highlight the potential of handling multiple modalities with one model. <br>
SpeechGPT demos are shown on our [project page](https://0nutation.github.io/SpeechGPT.github.io/). As shown in the demos, SpeechGPT has strong cross-modal instruction-following ability and spoken dialogue ability. SpeechGPT can be **a talking encyclopedia, your personal assistant, your chat partner, a poet, a psychologist and your educational assistant**...
<br>
<br>
<p align="center">
<img src="Pictures/speechgpt-intro.png" width="95%"> <br>
SpeechGPT’s capabilities to tackle multiple cross-modal tasks
</p>
<br>
<br>
<p align="center">
<img src="Pictures/SpeechGPT-main.png" width="95%"> <br>
Left: SpeechInstruct construction process. Right: SpeechGPT model structure
</p>
## Release
- **[2023/9/15]** We released the SpeechGPT code, checkpoints, and the SpeechInstruct dataset.
- **[2023/9/1]** We proposed **SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models** and released its code and checkpoints. Check out the [paper](https://arxiv.org/abs/2308.16692), [demo](https://0nutation.github.io/SpeechTokenizer.github.io/) and [github](https://github.com/ZhangXInFD/SpeechTokenizer).
- **[2023/5/18]** We released **SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities**. We propose SpeechGPT, the first multi-modal LLM capable of perceiving and generating multi-modal content following multi-modal human instructions. Check out the [paper](https://arxiv.org/abs/2305.11000) and [demo](https://0nutation.github.io/SpeechGPT.github.io/).
## Table of Contents
- [Open-source list](#open-source-list)
- [Talk with SpeechGPT](#talk-with-speechgpt)
- [Train SpeechGPT](#train-speechgpt)
- [Finetune SpeechGPT](#finetune-speechgpt)
## Open-source list
### Models
- [**SpeechGPT-7B-ma**](https://huggingface.co/fnlp/SpeechGPT-7B-ma): The model obtained after the first-stage modality-adaptation pre-training, which was initialized with LLaMA-7B and further pre-trained on LibriLight speech units.
- [**SpeechGPT-7B-cm**](https://huggingface.co/fnlp/SpeechGPT-7B-cm): The model obtained after the second-stage cross-modal instruction finetuning, which was initialized with SpeechGPT-7B-ma and further finetuned on SpeechInstruct Cross-Modal Instruction set. This is a powerful foundational model that aligns speech and text.
- [**SpeechGPT-7B-com**](https://huggingface.co/fnlp/SpeechGPT-7B-com): The model obtained after the third-stage chain-of-modality instruction finetuning, which was initialized with SpeechGPT-7B-cm and further lora-finetuned on SpeechInstruct Chain-of-Modality Instruction set. This is an adapter-model of SpeechGPT-7B-cm for spoken dialogue.
### Datasets
- [**SpeechInstruct-cross-modal**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl): The cross-modal instruction set, about 9 million unit-text data pairs tokenized by mHuBERT from large-scale English ASR datasets.
- [**SpeechInstruct-chain-of-modality**](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl): The chain-of-thought style instructions for four input-output formats, namely Speech Instruction-Speech Response, Speech Instruction-Text Response, Text Instruction-Speech Response, and Text Instruction-Text Response.
SpeechInstruct-cross-modal data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: Try to speak out this sentence, please. This is input: The alchemist rode in front, with the falcon on his shoulder.<eoh> [SpeechGPT]: <sosp><661><588><604><157><596><499><596><106><596><189><63><189><665><991><162><202><393><946><327><905><907><597><660><351><557><794><788><59><754><12><977><877><333><873><835><67><940><118><686><613><169><72><644><553><535><935><101><741><384><173><894><787><380><787><196><555><721><944><250><56><812><222><915><143><390><479><330><435><647><246><650><816><325><506><686><208><613><417><755><193><411><452><111><735><6><735><63><665><644><991><535><271><333><196><918><29><202><393><946><734><390><479><330><776><167><761><907><597><660><351><557><794><75><788><15><366><896><627><168><654><659><177><183><609><710><187><493><361><470><821><59><56><198><912><742><840><431><531><76><668><576><803><791><380><660><325><801><549><366><377><164><309><584><605><193><71><39><eosp><eoa> "
},
]
```
SpeechInstruct-chain-of-modality data format:
```
[
{
"prefix": "You are an AI assistant whose name is SpeechGPT.\n- SpeechGPT is a intrinsic cross-modal conversational language model that is developed by Fudan University. SpeechGPT can understand and communicate fluently with human through speech or text chosen by the user.\n- It can perceive cross-modal inputs and generate cross-modal outputs.\n",
"plain_text": "[Human]: <sosp><661><987><511><732><951><997><111><982><189><63><665><991><535><101><741><173><945><944><503><641><124><565><734><870><290><978><833><238><761><907><430><901><185><403><557><244><583><788><663><969><896><627><143><515><663><969><660><691><251><412><260><41><740><677><253><380><382><268><506><876><417><755><16><819><80><651><80><651><80><987><588><eosp><eoh>. [SpeechGPT]: What is a bad term for poop?; [ta] A bad term for poop is excrement. It is usually used as a polite way to refer to fecal waste.; [ua] <sosp><497><63><264><644><710><823><565><577><154><331><384><173><945><29><244><326><583><728><576><663><969><896><627><143><38><515><663><24><382><251><676><412><260><41><740><677><253><382><268><876><233><878><609><389><771><865><641><124><878><609><423><384><879><487><219><522><589><337><126><119><663><748><12><671><877><377><385><902><819><619><842><419><997><829><111><666><42><277><63><665><644><389><771><685><437><641><124><258><436><139><340><11><59><518><56><948><86><258><436><139><340><347><376><940><118><944><878><173><641><124><362><734><179><961><931><878><609><423><384><879><219><522><866><337><243><935><101><741><822><89><194><630><86><555><105><79><868><220><156><824><998><870><390><422><330><776><663><969><523><105><79><799><220><357><390><479><422><330><776><485><165><86><501><119><716><205><521><787><935><101><741><89><194><664><835><67><940><118><613><417><755><902><415><772><497><eosp><eoa>."
},
]
```
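Both record types wrap discrete speech units as `<123>` tokens between `<sosp>` and `<eosp>`. A small helper sketch for pulling those unit ids back out of a record's `plain_text` field:

```python
import re

def extract_units(plain_text):
    """Return the discrete unit ids inside the first <sosp>...<eosp> span."""
    m = re.search(r"<sosp>(.*?)<eosp>", plain_text)
    return [int(u) for u in re.findall(r"<(\d+)>", m.group(1))] if m else []

assert extract_units("[SpeechGPT]: <sosp><661><588><604><eosp><eoa>") == [661, 588, 604]
```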
## Talk with SpeechGPT
**Due to limited training data and resources, the performance of the open-source SpeechGPT is currently not optimal. Problems such as task recognition errors and inaccuracies in speech recognition may occur. As this project is primarily an exploration in research, we have not increased the amount of pretraining and SFT data or training steps to enhance performance. Our hope is that SpeechGPT can serve as a foundational model to encourage research and exploration in the field of speech language models.**
### Installation
```bash
git clone https://github.com/0nutation/SpeechGPT
cd SpeechGPT
conda create --name SpeechGPT python=3.8
conda activate SpeechGPT
pip install -r requirements.txt
```
### Download
To talk with SpeechGPT, you should download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) and [SpeechGPT-7B-com](https://huggingface.co/fnlp/SpeechGPT-7B-com) locally.
You should download the mHuBERT model to ```utils/speech2unit/```. Please see [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md) for details.
```bash
s2u_dir="utils/speech2unit"
cd ${s2u_dir}
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3.pt
wget https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_en_es_fr_it3_L11_km1000.bin
```
You should download the unit-vocoder to ```utils/vocoder/```. Please see [vocoder](https://github.com/0nutation/SpeechGPT/utils/vocoder/README_DATA.md) for details.
```bash
vocoder_dir="utils/vocoder/"
cd ${vocoder_dir}
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/config.json -O config.json
wget https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/mhubert_vp_en_es_fr_it3_400k_layer11_km1000_lj/g_00500000 -O vocoder.pt
```
### CLI Inference
```bash
python3 speechgpt/src/infer/cli_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output"
```
**Notes**
For speech input, you can provide the path to the audio file. For ASR or TTS tasks, you must prefix the speech or text with ```this is input: ```; otherwise, it may be recognized incorrectly.
The speech response will be saved to a ```.wav``` file, and detailed responses will be saved in a JSON file. The paths to these files will be indicated in the response.
Here are some examples of talking with SpeechGPT:
**Textual dialogue example**
```
Please talk with SpeechGPT:
Who is Lebron James?
Response:
Lebron James is an American professional basketball player for the Los Angeles Lakers of the National Basketball Association (NBA). He is considered one of the greatest basketball players of all time and is known for his athleticism, scoring ability, and leadership skills. He is a four-time NBA MVP, a 14-time NBA All-Star, a 13-time All-NBA selection, and a two-time Olympic gold medalist.
Response json is saved in output/responses.json
```
**Spoken dialogue example**
```
Please talk with SpeechGPT:
prompts/0.wav
Transcript: What are the main causes of climate change?
Text response: The main causes of climate change are human activities such as burning fossil fuels, deforestation, and agricultural practices. These activities release greenhouse gases, like carbon dioxide and Methane, into the atmosphere which trap heat and cause the Earth's temperature to rise.
Speech response is saved in output/wav/answer_0.wav
Response json is saved in output/responses.json
```
**ASR example**
```
Please talk with SpeechGPT:
Recognize this speech, this is input: prompts/1.wav
Response:
today is a sunny day.
Response json is saved in output/responses.json
```
**TTS example**
```
Please talk with SpeechGPT:
Read this sentence aloud, this is input: Today is a sunny day.
Response:
<sosp> <661> <987> <520> <982> <681> <982> <681> <982> <681> <982> <681> <982> <189> <63> <662> <79> <868> <220> <196> <166> <549> <822> <89> <194> <633> <14> <855> <183> <609> <389> <771> <865> <641> <124> <362> <734> <742> <98> <519> <26> <204> <280> <668> <167> <104> <650> <179> <961> <428> <950> <82> <165> <196> <166> <549> <822> <89> <194> <458> <726> <603> <819> <651> <133> <651> <133> <186> <133> <186> <133> <186> <511> <186> <511> <eosp>
Speech response is saved in output/wav/answer_1.wav
Response json is saved in output/responses.json
```
### Gradio Web UI
```bash
python3 speechgpt/src/infer/web_infer.py \
--model-name-or-path "path/to/SpeechGPT-7B-cm" \
--lora-weights "path/to/SpeechGPT-7B-com" \
--s2u-dir "${s2u_dir}" \
--vocoder-dir "${vocoder_dir}" \
--output-dir "output/"
```
## Train SpeechGPT
### Stage 1: Modality-adaptation Pre-training
First, utilize mHuBERT for discretizing the LibriLight dataset to obtain discrete unit sequences for stage1 training. You can refer to the data processing methods in [Speech2unit](https://github.com/0nutation/SpeechGPT/utils/speech2unit/README_DATA.md).
Second, divide the discrete units into a training set and a development set, and save them in the following format in the files ```data/stage1/train.txt``` and ```data/stage1/dev.txt```:
```
<sosp><189><247><922><991><821><258><485><974><284><466><969><523><196><202><881><331><822><853><432><32><742><98><519><26><204><280><576><384><879><901><555><944><366><641><124><362><734><156><824><462><761><907><430><81><597><716><205><521><470><821><677><355><483><641><124><243><290><978><82><620><915><470><821><576><384><466><398><212><455><931><579><969><778><45><914><445><469><576><803><6><803><791><377><506><835><67><940><613><417><755><237><224><452><121><736><eosp>
<sosp><300><189><63><6><665><991><881><331><6><384><879><945><29><244><583><874><655><837><81><627><545><124><337><850><412><213><260><41><740><797><211><488><961><428><6><196><555><944><873><32><683><700><955><812><328><915><166><250><56><903><86><233><479><330><776><167><104><764><259><921><366><663><432><431><531><976><314><822><89><664><377><611><479><417><eosp>
<sosp><189><735><991><39><565><734><32><742><98><519><26><204><280><668><576><803><791><660><555><233><787><101><741><466><969><219><107><459><491><556><384><733><219><501><445><137><910><523><793><50><981><230><534><321><948><86><116><281><62><462><104><70><918><743><15><212><455><143><836><173><944><958><390><422><66><776><258><436><139><663><432><742><98><519><589><243><126><260><41><444><6><655><764><969><219><727><85><297><700><362><493><6><493><361><393><946><6><470><821><246><655><837><81><969><916><584><819><544><452><158><452><736><eosp>
```
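The train/dev split itself is mechanical. Below is a minimal sketch (a hypothetical helper, not part of the repository) that writes discretized unit sequences into the two files described above, using an arbitrary 5% development ratio:

```python
import random

def write_splits(unit_lines, train_path, dev_path, dev_ratio=0.05, seed=0):
    # Shuffle deterministically, then carve off a small development set.
    lines = list(unit_lines)
    random.Random(seed).shuffle(lines)
    n_dev = max(1, int(len(lines) * dev_ratio))
    dev, train = lines[:n_dev], lines[n_dev:]
    with open(train_path, "w") as f:
        f.write("\n".join(train) + "\n")
    with open(dev_path, "w") as f:
        f.write("\n".join(dev) + "\n")
    return len(train), n_dev
```

Call it with the discretized sequences and the paths ```data/stage1/train.txt``` and ```data/stage1/dev.txt```.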
Third, you should download LLaMA 7B (HuggingFace) to ```llama/hf/7B```.
Now you can start stage1 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/ma_pretrain.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 2: Cross-modal Instruction Finetuning
You should download [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl) to ```data/stage2/```.
If you want to skip stage1 training, you can download ```SpeechGPT-7B-ma``` to ```output/stage1/```.
Now you can start stage2 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/cm_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
### Stage 3: Chain-of-modality Instruction Finetuning
You should download [SpeechInstruct Chain-of-modality Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/chain_of_modality_instruction.jsonl) to ```data/stage3/```.
If you want to skip stage1 and stage2, you can download ```SpeechGPT-7B-cm``` to ```output/stage2/```.
Now you can start stage3 training:
To perform distributed training, you must specify the correct values for ```NNODE```, ```NODE_RANK```, ```MASTER_ADDR```, and ```MASTER_PORT```.
```bash
bash scripts/com_sft.sh ${NNODE} ${NODE_RANK} ${MASTER_ADDR} ${MASTER_PORT}
```
## Finetune SpeechGPT
```SpeechGPT-7B-cm``` is a foundational model with strong alignment between speech and text. We encourage fine-tuning SpeechGPT based on this model.
Step 1: Prepare your data following the format in [SpeechInstruct Cross-modal Instruction set](https://huggingface.co/datasets/fnlp/SpeechInstruct/resolve/main/cross_modal_instruction.jsonl).
Step 2: Download [SpeechGPT-7B-cm](https://huggingface.co/fnlp/SpeechGPT-7B-cm) locally.
Step 3: Modify the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/cm_sft.sh``` script to your own paths, then run it. For LoRA fine-tuning, update the ```METAROOT```, ```DATAROOT```, and ```OUTROOT``` parameters in the ```scripts/com_sft.sh``` script and run it.
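For the data-preparation step, a quick sanity check of the prepared file can catch formatting slips early. The sketch below is hypothetical and assumes one JSON object per line with a `plain_text` field, as in the SpeechInstruct excerpt near the top of this card; adjust the field name to the actual schema:

```python
import json

def count_valid_records(path, required_field="plain_text"):
    # Count lines that parse as JSON objects containing the required field.
    n = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            assert required_field in record, f"missing {required_field!r}"
            n += 1
    return n
```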
## Acknowledgements
- [MOSS](https://github.com/OpenLMLab/MOSS): We use moss-sft-002-data.
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca): The codebase we built upon.
## Citation
If you find SpeechGPT useful for your research and applications, please cite using the following BibTeX:
```
@misc{zhang2023speechgpt,
title={SpeechGPT: Empowering Large Language Models with Intrinsic Cross-Modal Conversational Abilities},
author={Dong Zhang and Shimin Li and Xin Zhang and Jun Zhan and Pengyu Wang and Yaqian Zhou and Xipeng Qiu},
year={2023},
eprint={2305.11000},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
tlano/Tora-NijiFurry-LoRA | tlano | 2023-09-15T10:51:37Z | 0 | 7 | null | [
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-07-18T16:08:36Z | ---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
# Description
A chibi-kemono (small furry) style LoRA.<br>
The training data consists solely of images generated with NijiJourney.<br>
<br>
The effect depends on how strongly the base model responds to the "furry" tag.<br>
<br>
If the "furry" tag has little effect on the model before this LoRA is applied,<br>
you may not get very good results.<br>
<br>
**Training Model:**<br>
 sdhk_v40.safetensors (https://civitai.com/models/82813/sdhk)<br>
**Trigger Words:**<br>
 furry<br>
<br>
**Author**<br>
 twitter: [@TlanoAI](https://twitter.com/TlanoAI)<br>
<br>
|
philschmid/ControlNet-endpoint | philschmid | 2023-09-15T10:38:51Z | 0 | 12 | null | [
"stable-diffusion",
"stable-diffusion-diffusers",
"controlnet",
"endpoints-template",
"arxiv:2302.05543",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| null | 2023-03-03T08:41:56Z | ---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- controlnet
- endpoints-template
thumbnail: "https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png"
inference: true
---
# Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
> ControlNet is a neural network structure to control diffusion models by adding extra conditions.
> Official repository: https://github.com/lllyasviel/ControlNet
---
Blog post: [Controlled text to image generation with Inference Endpoints]()
This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included that shows how to create the `handler.py`.

### Expected request payload
```json
{
"inputs": "A prompt used for image generation",
"negative_prompt": "low res, bad anatomy, worst quality, low quality",
"controlnet_type": "depth",
"image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
}
```
Supported `controlnet_type` values are: `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, `hough`.
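A small helper (hypothetical, not part of this repository) can assemble and sanity-check such a payload locally before sending it to the endpoint:

```python
import base64

SUPPORTED_TYPES = {"canny_edge", "pose", "depth", "scribble",
                   "segmentation", "normal", "hed", "hough"}

def build_payload(prompt, image_bytes, controlnet_type="depth", negative_prompt=None):
    # Reject unknown conditioning types before hitting the endpoint.
    if controlnet_type not in SUPPORTED_TYPES:
        raise ValueError(f"unsupported controlnet_type: {controlnet_type}")
    payload = {
        "inputs": prompt,
        "controlnet_type": controlnet_type,
        # The endpoint expects the image as a base64-encoded string.
        "image": base64.b64encode(image_bytes).decode("utf-8"),
    }
    if negative_prompt is not None:
        payload["negative_prompt"] = negative_prompt
    return payload
```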
Below is an example of how to send a request using Python and `requests`.
## Use Python to send requests
1. Get image
```
wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
```
2. Use the following code to send a request to the endpoint
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, negative_prompt=None, controlnet_type = "normal"):
image = encode_image(image)
# prepare sample payload
request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=request)
if response.status_code != 200:
print(response.text)
raise Exception("Prediction failed")
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt = "cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
negative_prompt ="lowres, bad anatomy, worst quality, low quality, city, traffic",
controlnet_type = "hed",
image = "huggingface.png"
)
prediction.save("result.png")
```
Expected output:

[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications. |
miojizzy/mhr_recognize_classify_model_whole | miojizzy | 2023-09-15T10:27:25Z | 48 | 0 | transformers | [
"transformers",
"pytorch",
"code",
"image-classification",
"zh",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-12T07:05:13Z | ---
license: apache-2.0
language:
- zh
- en
pipeline_tag: image-classification
tags:
- code
---
|
IndianaUniversityDatasetsModels/MIMIC-Medical-Report-Generator | IndianaUniversityDatasetsModels | 2023-09-15T10:20:21Z | 118 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"medical",
"en",
"dataset:IndianaUniversityDatasetsModels/MIMIC-medical-report",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-04-09T23:19:25Z | ---
license: apache-2.0
language:
- en
metrics:
- rouge
library_name: transformers
tags:
- medical
datasets:
- IndianaUniversityDatasetsModels/MIMIC-medical-report
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Inputs and Outputs
- **Expected Input:** "[INDICATION] + Text"
- **Target Output:** "[findings] + Text [impression] + Text"
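As a sketch of that I/O convention (the helper names are hypothetical; the `[INDICATION]`, `[findings]`, and `[impression]` tags come from the format above):

```python
def build_input(indication: str) -> str:
    # Prefix the indication text with the tag the model expects.
    return f"[INDICATION] {indication}"

def parse_output(generated: str) -> dict:
    # Split a generated "[findings] ... [impression] ..." string into parts.
    findings, _, impression = generated.partition("[impression]")
    return {
        "findings": findings.replace("[findings]", "", 1).strip(),
        "impression": impression.strip(),
    }
```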
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vineetsharma/BioMedical_NER-maccrobat-bert | vineetsharma | 2023-09-15T09:55:20Z | 122 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ktgiahieu/maccrobat2018_2020",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-04T09:59:55Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- ktgiahieu/maccrobat2018_2020
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BioMedical_NER-maccrobat-bert
results: []
widget:
- text: "CASE: A 28-year-old previously healthy man presented with a 6-week history of palpitations.
The symptoms occurred during rest, 2–3 times per week, lasted up to 30 minutes at a time and were associated with dyspnea.
Except for a grade 2/6 holosystolic tricuspid regurgitation murmur (best heard at the left sternal border with inspiratory accentuation), physical examination yielded unremarkable findings."
example_title: "example 1"
- text: "A 63-year-old woman with no known cardiac history presented with a sudden onset of dyspnea requiring intubation and ventilatory support out of hospital.
She denied preceding symptoms of chest discomfort, palpitations, syncope or infection.
The patient was afebrile and normotensive, with a sinus tachycardia of 140 beats/min."
example_title: "example 2"
- text: "A 48 year-old female presented with vaginal bleeding and abnormal Pap smears.
Upon diagnosis of invasive non-keratinizing SCC of the cervix, she underwent a radical hysterectomy with salpingo-oophorectomy which demonstrated positive spread to the pelvic lymph nodes and the parametrium.
Pathological examination revealed that the tumour also extensively involved the lower uterine segment."
example_title: "example 3"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioMedical_NER-maccrobat-bert
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [maccrobat2018_2020](https://huggingface.co/datasets/ktgiahieu/maccrobat2018_2020) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3418
- Precision: 0.8668
- Recall: 0.9491
- F1: 0.9061
- Accuracy: 0.9501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 45 | 1.7363 | 0.4262 | 0.0055 | 0.0108 | 0.6274 |
| No log | 2.0 | 90 | 1.3805 | 0.3534 | 0.2073 | 0.2613 | 0.6565 |
| No log | 3.0 | 135 | 1.1713 | 0.4026 | 0.3673 | 0.3841 | 0.6908 |
| No log | 4.0 | 180 | 1.0551 | 0.4392 | 0.5309 | 0.4807 | 0.7149 |
| No log | 5.0 | 225 | 0.9591 | 0.4893 | 0.6012 | 0.5395 | 0.7496 |
| No log | 6.0 | 270 | 0.8656 | 0.5156 | 0.6483 | 0.5744 | 0.7722 |
| No log | 7.0 | 315 | 0.8613 | 0.5124 | 0.6871 | 0.5870 | 0.7716 |
| No log | 8.0 | 360 | 0.7524 | 0.5699 | 0.7114 | 0.6329 | 0.8110 |
| No log | 9.0 | 405 | 0.6966 | 0.5884 | 0.7374 | 0.6545 | 0.8265 |
| No log | 10.0 | 450 | 0.6564 | 0.6147 | 0.7678 | 0.6827 | 0.8373 |
| No log | 11.0 | 495 | 0.5950 | 0.6484 | 0.7826 | 0.7092 | 0.8563 |
| 0.9321 | 12.0 | 540 | 0.6083 | 0.6578 | 0.8001 | 0.7220 | 0.8587 |
| 0.9321 | 13.0 | 585 | 0.5821 | 0.6682 | 0.8206 | 0.7366 | 0.8688 |
| 0.9321 | 14.0 | 630 | 0.5578 | 0.6787 | 0.8324 | 0.7477 | 0.8744 |
| 0.9321 | 15.0 | 675 | 0.4819 | 0.7338 | 0.8484 | 0.7870 | 0.8974 |
| 0.9321 | 16.0 | 720 | 0.4775 | 0.7461 | 0.8573 | 0.7978 | 0.9020 |
| 0.9321 | 17.0 | 765 | 0.4786 | 0.7395 | 0.8600 | 0.7952 | 0.9020 |
| 0.9321 | 18.0 | 810 | 0.4481 | 0.7647 | 0.8740 | 0.8157 | 0.9102 |
| 0.9321 | 19.0 | 855 | 0.4597 | 0.7638 | 0.8799 | 0.8177 | 0.9108 |
| 0.9321 | 20.0 | 900 | 0.4551 | 0.7617 | 0.8835 | 0.8181 | 0.9096 |
| 0.9321 | 21.0 | 945 | 0.4365 | 0.7698 | 0.8873 | 0.8244 | 0.9142 |
| 0.9321 | 22.0 | 990 | 0.3993 | 0.7986 | 0.8957 | 0.8444 | 0.9247 |
| 0.2115 | 23.0 | 1035 | 0.4162 | 0.7950 | 0.9014 | 0.8449 | 0.9234 |
| 0.2115 | 24.0 | 1080 | 0.4188 | 0.8007 | 0.9042 | 0.8493 | 0.9248 |
| 0.2115 | 25.0 | 1125 | 0.3996 | 0.8105 | 0.9103 | 0.8575 | 0.9291 |
| 0.2115 | 26.0 | 1170 | 0.3775 | 0.8226 | 0.9134 | 0.8657 | 0.9333 |
| 0.2115 | 27.0 | 1215 | 0.3656 | 0.8297 | 0.9187 | 0.8720 | 0.9364 |
| 0.2115 | 28.0 | 1260 | 0.3744 | 0.8323 | 0.9217 | 0.8747 | 0.9371 |
| 0.2115 | 29.0 | 1305 | 0.3763 | 0.8296 | 0.9229 | 0.8738 | 0.9364 |
| 0.2115 | 30.0 | 1350 | 0.3506 | 0.8454 | 0.9272 | 0.8844 | 0.9414 |
| 0.2115 | 31.0 | 1395 | 0.3602 | 0.8441 | 0.9301 | 0.8850 | 0.9413 |
| 0.2115 | 32.0 | 1440 | 0.3617 | 0.8359 | 0.9303 | 0.8806 | 0.9400 |
| 0.2115 | 33.0 | 1485 | 0.3737 | 0.8352 | 0.9310 | 0.8805 | 0.9388 |
| 0.0818 | 34.0 | 1530 | 0.3541 | 0.8477 | 0.9352 | 0.8893 | 0.9438 |
| 0.0818 | 35.0 | 1575 | 0.3553 | 0.8487 | 0.9377 | 0.8910 | 0.9439 |
| 0.0818 | 36.0 | 1620 | 0.3583 | 0.8476 | 0.9367 | 0.8899 | 0.9438 |
| 0.0818 | 37.0 | 1665 | 0.3318 | 0.8642 | 0.9400 | 0.9005 | 0.9484 |
| 0.0818 | 38.0 | 1710 | 0.3449 | 0.8598 | 0.9409 | 0.8985 | 0.9471 |
| 0.0818 | 39.0 | 1755 | 0.3466 | 0.8591 | 0.9419 | 0.8986 | 0.9468 |
| 0.0818 | 40.0 | 1800 | 0.3494 | 0.8591 | 0.9426 | 0.8989 | 0.9473 |
| 0.0818 | 41.0 | 1845 | 0.3494 | 0.8591 | 0.9451 | 0.9001 | 0.9475 |
| 0.0818 | 42.0 | 1890 | 0.3545 | 0.8588 | 0.9462 | 0.9004 | 0.9477 |
| 0.0818 | 43.0 | 1935 | 0.3569 | 0.8599 | 0.9460 | 0.9009 | 0.9470 |
| 0.0818 | 44.0 | 1980 | 0.3465 | 0.8645 | 0.9468 | 0.9038 | 0.9492 |
| 0.0469 | 45.0 | 2025 | 0.3424 | 0.8663 | 0.9489 | 0.9057 | 0.9498 |
| 0.0469 | 46.0 | 2070 | 0.3460 | 0.8643 | 0.9481 | 0.9043 | 0.9490 |
| 0.0469 | 47.0 | 2115 | 0.3445 | 0.8658 | 0.9483 | 0.9052 | 0.9496 |
| 0.0469 | 48.0 | 2160 | 0.3387 | 0.8701 | 0.9500 | 0.9083 | 0.9508 |
| 0.0469 | 49.0 | 2205 | 0.3432 | 0.8671 | 0.9491 | 0.9063 | 0.9501 |
| 0.0469 | 50.0 | 2250 | 0.3418 | 0.8668 | 0.9491 | 0.9061 | 0.9501 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
leonard-pak/a2c-PandaReachDense-v3 | leonard-pak | 2023-09-15T09:49:52Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T09:35:16Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
thienkieu611/mt5-translation | thienkieu611 | 2023-09-15T09:45:50Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-14T12:33:48Z | ---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-translation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8131
- Bleu: 33.2736
- Gen Len: 15.9643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.3759 | 1.0 | 6352 | 0.9269 | 28.6704 | 16.0318 |
| 1.2615 | 2.0 | 12704 | 0.8545 | 31.4174 | 15.9769 |
| 1.2083 | 3.0 | 19056 | 0.8187 | 33.0994 | 15.9707 |
| 1.1886 | 4.0 | 25408 | 0.8071 | 33.5944 | 15.9657 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
FedeBerto/Griffith-Sentiment | FedeBerto | 2023-09-15T09:42:50Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-09-12T09:42:20Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | AdamW |
| weight_decay | 0.001 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 3.3287092264799867e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-08 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
CyberHarem/nanakusa_hazuki_theidolmstershinycolors | CyberHarem | 2023-09-15T09:41:43Z | 0 | 1 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/nanakusa_hazuki_theidolmstershinycolors",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T09:21:43Z | ---
license: mit
datasets:
- CyberHarem/nanakusa_hazuki_theidolmstershinycolors
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nanakusa_hazuki_theidolmstershinycolors
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 2280, you need to download `2280/nanakusa_hazuki_theidolmstershinycolors.pt` as the embedding and `2280/nanakusa_hazuki_theidolmstershinycolors.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 2280**, with the score of 0.957. The trigger words are:
1. `nanakusa_hazuki_theidolmstershinycolors`
2. `green_hair, blush, green_eyes, bangs, breasts, folded_ponytail, smile`
We do not recommend this model for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models with LoRA, or who believe character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 5700 | 0.913 | [Download](5700/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](5700/previews/pattern_4.png) |  |  | [<NSFW, click to see>](5700/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5700/previews/pattern_11.png) |  |  | [<NSFW, click to see>](5700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5700/previews/nude.png) | [<NSFW, click to see>](5700/previews/nude2.png) |  |  |
| 5320 | 0.909 | [Download](5320/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](5320/previews/pattern_4.png) |  |  | [<NSFW, click to see>](5320/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](5320/previews/pattern_11.png) |  |  | [<NSFW, click to see>](5320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5320/previews/nude.png) | [<NSFW, click to see>](5320/previews/nude2.png) |  |  |
| 4940 | 0.927 | [Download](4940/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](4940/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4940/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4940/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4940/previews/nude.png) | [<NSFW, click to see>](4940/previews/nude2.png) |  |  |
| 4560 | 0.919 | [Download](4560/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](4560/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4560/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4560/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4560/previews/nude.png) | [<NSFW, click to see>](4560/previews/nude2.png) |  |  |
| 4180 | 0.902 | [Download](4180/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](4180/previews/pattern_4.png) |  |  | [<NSFW, click to see>](4180/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](4180/previews/pattern_11.png) |  |  | [<NSFW, click to see>](4180/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4180/previews/nude.png) | [<NSFW, click to see>](4180/previews/nude2.png) |  |  |
| 3800 | 0.950 | [Download](3800/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](3800/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3800/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3800/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3800/previews/nude.png) | [<NSFW, click to see>](3800/previews/nude2.png) |  |  |
| 3420 | 0.944 | [Download](3420/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](3420/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3420/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3420/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3420/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3420/previews/nude.png) | [<NSFW, click to see>](3420/previews/nude2.png) |  |  |
| 3040 | 0.884 | [Download](3040/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](3040/previews/pattern_4.png) |  |  | [<NSFW, click to see>](3040/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](3040/previews/pattern_11.png) |  |  | [<NSFW, click to see>](3040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3040/previews/nude.png) | [<NSFW, click to see>](3040/previews/nude2.png) |  |  |
| 2660 | 0.954 | [Download](2660/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](2660/previews/pattern_4.png) |  |  | [<NSFW, click to see>](2660/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2660/previews/pattern_11.png) |  |  | [<NSFW, click to see>](2660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2660/previews/nude.png) | [<NSFW, click to see>](2660/previews/nude2.png) |  |  |
| **2280** | **0.957** | [**Download**](2280/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](2280/previews/pattern_4.png) |  |  | [<NSFW, click to see>](2280/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](2280/previews/pattern_11.png) |  |  | [<NSFW, click to see>](2280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2280/previews/nude.png) | [<NSFW, click to see>](2280/previews/nude2.png) |  |  |
| 1900 | 0.962 | [Download](1900/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](1900/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1900/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1900/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1900/previews/nude.png) | [<NSFW, click to see>](1900/previews/nude2.png) |  |  |
| 1520 | 0.927 | [Download](1520/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](1520/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1520/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1520/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1520/previews/nude.png) | [<NSFW, click to see>](1520/previews/nude2.png) |  |  |
| 1140 | 0.863 | [Download](1140/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](1140/previews/pattern_4.png) |  |  | [<NSFW, click to see>](1140/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](1140/previews/pattern_11.png) |  |  | [<NSFW, click to see>](1140/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1140/previews/nude.png) | [<NSFW, click to see>](1140/previews/nude2.png) |  |  |
| 760 | 0.807 | [Download](760/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](760/previews/pattern_4.png) |  |  | [<NSFW, click to see>](760/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](760/previews/pattern_11.png) |  |  | [<NSFW, click to see>](760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](760/previews/nude.png) | [<NSFW, click to see>](760/previews/nude2.png) |  |  |
| 380 | 0.820 | [Download](380/nanakusa_hazuki_theidolmstershinycolors.zip) |  |  |  | [<NSFW, click to see>](380/previews/pattern_4.png) |  |  | [<NSFW, click to see>](380/previews/pattern_7.png) |  |  |  | [<NSFW, click to see>](380/previews/pattern_11.png) |  |  | [<NSFW, click to see>](380/previews/bondage.png) |  |  |  | [<NSFW, click to see>](380/previews/nude.png) | [<NSFW, click to see>](380/previews/nude2.png) |  |  |
|
cedpsam/cedpsam_EleutherAI_gpt-neo-125M-stablediffionprompts-stablediffionprompts | cedpsam | 2023-09-15T09:35:45Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:cedpsam/EleutherAI_gpt-neo-125M-stablediffionprompts",
"base_model:finetune:cedpsam/EleutherAI_gpt-neo-125M-stablediffionprompts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-10-05T07:26:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: cedpsam/EleutherAI_gpt-neo-125M-stablediffionprompts
model-index:
- name: cedpsam_EleutherAI_gpt-neo-125M-stablediffionprompts-stablediffionprompts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cedpsam_EleutherAI_gpt-neo-125M-stablediffionprompts-stablediffionprompts
This model is a fine-tuned version of [cedpsam/EleutherAI_gpt-neo-125M-stablediffionprompts](https://huggingface.co/cedpsam/EleutherAI_gpt-neo-125M-stablediffionprompts) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 15000
- mixed_precision_training: Native AMP
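The linear scheduler listed above decays the learning rate from its initial value to zero over the 15000 training steps. A minimal sketch of that schedule, assuming no warmup (the card does not mention any):

```python
def linear_lr(step, base_lr=5e-05, total_steps=15000):
    """Linearly decay the learning rate from base_lr to zero over training."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Full rate at the start, half at the midpoint, zero at the end.
lr_start, lr_mid, lr_end = linear_lr(0), linear_lr(7500), linear_lr(15000)
```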
### Framework versions
- Transformers 4.23.0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
dbecker1/sd-pokemon-model-lora-sdxl | dbecker1 | 2023-09-15T09:20:39Z | 1 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-15T08:21:26Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: lambdalabs/pokemon-blip-captions
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - dbecker1/sd-pokemon-model-lora-sdxl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the lambdalabs/pokemon-blip-captions dataset. Example images are shown below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
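LoRA weights like these are low-rank update matrices added onto the base model's weights at load time, W' = W + (α/r)·B·A. A toy pure-Python sketch of that merge, with illustrative shapes rather than the actual SDXL tensors:

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def merge_lora(W, A, B, alpha):
    """Merge a LoRA update into a base weight matrix: W' = W + (alpha / r) * B @ A."""
    r = len(A)                          # LoRA rank = rows of the down-projection
    BA = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

W = [[1.0, 0.0], [0.0, 1.0]]            # toy 2x2 base weight
A = [[0.5, 0.5]]                        # rank-1 down-projection (1x2)
B = [[2.0], [0.0]]                      # up-projection (2x1)
W2 = merge_lora(W, A, B, alpha=1)
```

In practice diffusers performs this merge (or applies the low-rank branch at runtime) over every targeted attention weight in the UNet.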
|
hattran/gpt2-vn-right-PROMPT_TUNING_CAUSAL_LM | hattran | 2023-09-15T09:20:01Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T09:19:57Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Wariano/bsc-bio-ehr-es-vih-rod | Wariano | 2023-09-15T09:12:09Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-09-14T11:10:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
model-index:
- name: bsc-bio-ehr-es-vih-rod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bsc-bio-ehr-es-vih-rod
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0021
- Precision: 1.0
- Sensitivity: 1.0
- F2: 1.0
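F2 is the F-beta score with β=2, which weights sensitivity (recall) four times as heavily as precision; presumably it was chosen because missed positives are costlier than false alarms in this clinical setting. A quick sketch of the formula:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score; beta=2 weights recall beta**2 = 4x as much as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reproduces the epoch-1 training result: P=0.9524, R=1.0 -> F2 ~ 0.9901
f2_epoch1 = round(f_beta(0.9524, 1.0), 4)
```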
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Sensitivity | F2 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-----------:|:------:|
| 0.0886 | 1.0 | 21 | 0.0702 | 0.9524 | 1.0 | 0.9901 |
| 0.0856 | 2.0 | 42 | 0.0050 | 1.0 | 1.0 | 1.0 |
| 0.0493 | 3.0 | 63 | 0.0031 | 1.0 | 1.0 | 1.0 |
| 0.0315 | 4.0 | 84 | 0.0021 | 1.0 | 1.0 | 1.0 |
| 0.0288 | 5.0 | 105 | 0.0021 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LyaaaaaGames/GPT-J-6B-Skein | LyaaaaaGames | 2023-09-15T09:11:43Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-03T00:22:49Z | ---
license: mit
---
Sharded version of the original https://huggingface.co/KoboldAI/GPT-J-6B-Skein |
LyaaaaaGames/GPT-J-6B-Adventure | LyaaaaaGames | 2023-09-15T09:11:13Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-02T21:58:29Z | ---
license: mit
---
Sharded version of the original https://huggingface.co/KoboldAI/GPT-J-6B-Adventure |
InfAI/flan-t5-text2sparql-naive | InfAI | 2023-09-15T09:01:37Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:lc_quad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-09-11T09:21:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- lc_quad
model-index:
- name: flan-t5-text2sparql-naive
results: []
---
# flan-t5-text2sparql-naive
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the lc_quad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4105
## Model description
T5 has performed well in generating SPARQL queries from natural text, but semi-automated preprocessing was necessary ([Banerjee et al.](https://dl.acm.org/doi/abs/10.1145/3477495.3531841)).
FLAN-T5 comes with the promise of being better than T5 across all categories, so a re-evaluation is needed. Our goal is to find
out what kind of preprocessing is still necessary to retain good performance, as well as how to automate it fully.
This is the naive version of the fine-tuned LLM, blindly applying the same tokenizer both on the natural language question as well as the target SPARQL query.
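For contrast, the preprocessing this naive version skips typically replaces SPARQL punctuation with vocabulary-friendly word tokens before tokenization. A hedged sketch of one such mapping (the token names here are illustrative, not Banerjee et al.'s exact scheme, and the naive replacement would break on dots inside literals):

```python
REPLACEMENTS = {"{": " brack_open ", "}": " brack_close ", "?": " var_", ".": " sep_dot "}

def encode_sparql(query):
    """Map SPARQL punctuation to plain-word tokens the T5 vocabulary handles well."""
    for sym, tok in REPLACEMENTS.items():
        query = query.replace(sym, tok)
    return " ".join(query.split())   # normalize whitespace

q = encode_sparql("SELECT ?obj WHERE { wd:Q42 wdt:P1082 ?obj . }")
```

A matching decode step would invert the mapping after generation.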
## Intended uses & limitations
This model performs very badly; do not use it! We wanted to find out whether preprocessing is still necessary or whether T5 can figure things out on its own. As it turns out, preprocessing
is still needed, so this model serves only as a baseline.
An example:
Input:
```
Create SPARQL Query: What was the population of Clermont-Ferrand on 1-1-2013?
```
Output:
```
'SELECT ?obj WHERE wd:Q2'}
```
## Training and evaluation data
LC_QUAD 2.0, see sidebar.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
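With train_batch_size 16 and gradient_accumulation_steps 4, gradients from four micro-batches are summed before each optimizer step, yielding the effective batch size of 64 listed above. A schematic loop in plain Python, standing in for the framework's actual training loop:

```python
def train_with_accumulation(micro_batch_grads, accum_steps=4):
    """Accumulate gradients over several micro-batches before one optimizer step."""
    accumulated, optimizer_steps = 0.0, 0
    for i, grad in enumerate(micro_batch_grads, start=1):
        accumulated += grad          # like backward(): grads add into buffers
        if i % accum_steps == 0:
            optimizer_steps += 1     # optimizer.step(); optimizer.zero_grad()
            accumulated = 0.0
    return optimizer_steps

# 8 micro-batches of 16 samples -> 2 optimizer steps at effective batch size 64
n_steps = train_with_accumulation([0.1] * 8)
```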
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 301 | 0.5173 |
| 0.6515 | 2.0 | 602 | 0.4861 |
| 0.6515 | 3.0 | 903 | 0.4639 |
| 0.4954 | 4.0 | 1204 | 0.4478 |
| 0.4627 | 5.0 | 1505 | 0.4340 |
| 0.4627 | 6.0 | 1806 | 0.4247 |
| 0.4404 | 7.0 | 2107 | 0.4177 |
| 0.4404 | 8.0 | 2408 | 0.4139 |
| 0.429 | 9.0 | 2709 | 0.4115 |
| 0.4201 | 10.0 | 3010 | 0.4105 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0xk1h0/codegen25-7B-peft-qlora | 0xk1h0 | 2023-09-15T09:00:49Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T08:42:43Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
feverishhh/Llava-Visionary-70B | feverishhh | 2023-09-15T08:49:29Z | 0 | 2 | null | [
"license:wtfpl",
"region:us"
]
| null | 2023-09-15T07:51:37Z | ---
license: wtfpl
---
# The Big Picture ([Brainproject.ai](http://brainproject.ai/))
The human brain is an intricate puzzle that we are continually striving to decode. The aim is to replicate its complexity, functionality, and depth in a digital realm: exploring the convergence of neuroscience and artificial intelligence to glean insights into the mind's workings and harness that knowledge in digital counterparts.
# Mixture of Experts
Llava-Visionary-70B utilizes a Mixture of Experts (MoE) architecture, with different expert modules specializing in various aspects of visual and language understanding. A gating mechanism selectively activates the most relevant experts for each input. This provides computational efficiency and scalability.
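The card does not publish the gating details, but a standard top-k softmax gate (the usual MoE routing pattern) can be sketched as follows. This is illustrative only, not the model's actual routing code:

```python
import math

def top_k_gate(logits, k=2):
    """Softmax over the k largest gate logits; all other experts get weight 0."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in top}
    z = sum(exps.values())
    return [exps.get(i, 0.0) / z for i in range(len(logits))]

def moe_forward(x, experts, logits, k=2):
    """Route input x to the top-k experts and return their weighted sum."""
    weights = top_k_gate(logits, k)
    return sum(w * expert(x) for w, expert in zip(weights, experts))

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: -x]
weights = top_k_gate([0.0, 1.0, -5.0], k=2)   # expert 2 is never activated
y = moe_forward(3.0, experts, logits=[0.0, 1.0, -5.0], k=2)
```

Because only k experts run per input, compute scales with k rather than the total expert count, which is the efficiency claim in the paragraph above.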
# Llava-Visionary-70B
<!-- Provide a summary of what the model is/does. -->
Llava-Visionary-70B is an artificial intelligence system designed for visual reasoning and multimodal understanding. It builds on top of the Llama-2 architecture using a Mixture of Experts approach.
The model has been further pretrained on a large dataset of YouTube videos and images to develop human-like visual comprehension abilities. This enables it to understand the semantics of images, videos, and multimodal content.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Priyanshu Pareek
- **Model type:** Transformer-based multimodal model
- **License:** wtfpl
- **Finetuned from model [optional]:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Llava-Visionary-70B is designed for tasks that involve:
- Visual understanding of images, videos, diagrams
- Multimodal reasoning with vision and language
- Answering questions about visual content
- Generating captions or descriptions of visual data
#### It can provide value for uses cases such as:
- Multimodal chatbots and digital assistants
- Image and video search/recommendation
- Automated alt-text generation
- Vision-based QA systems
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Llava-Visionary-70B can be used out-of-the-box without further training for zero-shot inference on downstream visual/multimodal tasks.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
## How to Get Started with the Model
Want to take Llava-Visionary-70B for a spin?
Load the model and tokenizer from HuggingFace:
```
from transformers import LlavaVisionary70BModel, LlavaVisionary70BTokenizer

tokenizer = LlavaVisionary70BTokenizer.from_pretrained("llava-visionary-70b")
model = LlavaVisionary70BModel.from_pretrained("llava-visionary-70b")
```
Pass multimodal input and generate output:
```
from PIL import Image

text = "What type of animal is shown in this picture?"
image = Image.open("animal.jpg")
inputs = tokenizer(text, images=image, return_tensors="pt")
outputs = model(**inputs)
```
## Training Details
### Training Data
Llava-Visionary-70B was further pretrained on a large dataset of YouTube videos and images.
### Training Procedure
The model was trained using supervised pretraining on video-text pairs, leveraging the original Llama-2 model weights.
#### Training Hyperparameters
- Batch size: 256
- Learning rate: 5e-5
- Optimizer: AdamW
- Training epochs: 3 |
CyberHarem/nanakusa_nichika_theidolmstershinycolors | CyberHarem | 2023-09-15T08:44:03Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/nanakusa_nichika_theidolmstershinycolors",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T08:21:35Z | ---
license: mit
datasets:
- CyberHarem/nanakusa_nichika_theidolmstershinycolors
pipeline_tag: text-to-image
tags:
- art
---
# Lora of nanakusa_nichika_theidolmstershinycolors
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7280, you need to download `7280/nanakusa_nichika_theidolmstershinycolors.pt` as the embedding and `7280/nanakusa_nichika_theidolmstershinycolors.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7280**, with the score of 0.921. The trigger words are:
1. `nanakusa_nichika_theidolmstershinycolors`
2. `green_hair, blush, green_eyes, short_hair, bangs`
We do not recommend this model for the following groups, with our regrets:
1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail.
2. Individuals whose use cases demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness of AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are uncomfortable with the fully automated process of training character models with LoRA, or who believe character models must be trained purely by hand to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 7800 | 0.911 | [Download](7800/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/pattern_12.png) |  |  | [<NSFW, click to see>](7800/previews/pattern_15.png) |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| **7280** | **0.921** | [**Download**](7280/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/pattern_12.png) |  |  | [<NSFW, click to see>](7280/previews/pattern_15.png) |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.880 | [Download](6760/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/pattern_12.png) |  |  | [<NSFW, click to see>](6760/previews/pattern_15.png) |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.897 | [Download](6240/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/pattern_12.png) |  |  | [<NSFW, click to see>](6240/previews/pattern_15.png) |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.905 | [Download](5720/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/pattern_12.png) |  |  | [<NSFW, click to see>](5720/previews/pattern_15.png) |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.910 | [Download](5200/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/pattern_12.png) |  |  | [<NSFW, click to see>](5200/previews/pattern_15.png) |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.873 | [Download](4680/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/pattern_12.png) |  |  | [<NSFW, click to see>](4680/previews/pattern_15.png) |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.917 | [Download](4160/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/pattern_12.png) |  |  | [<NSFW, click to see>](4160/previews/pattern_15.png) |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.881 | [Download](3640/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/pattern_12.png) |  |  | [<NSFW, click to see>](3640/previews/pattern_15.png) |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.920 | [Download](3120/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/pattern_12.png) |  |  | [<NSFW, click to see>](3120/previews/pattern_15.png) |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.947 | [Download](2600/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/pattern_12.png) |  |  | [<NSFW, click to see>](2600/previews/pattern_15.png) |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.907 | [Download](2080/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/pattern_12.png) |  |  | [<NSFW, click to see>](2080/previews/pattern_15.png) |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.885 | [Download](1560/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/pattern_12.png) |  |  | [<NSFW, click to see>](1560/previews/pattern_15.png) |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.899 | [Download](1040/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/pattern_12.png) |  |  | [<NSFW, click to see>](1040/previews/pattern_15.png) |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.928 | [Download](520/nanakusa_nichika_theidolmstershinycolors.zip) |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/pattern_12.png) |  |  | [<NSFW, click to see>](520/previews/pattern_15.png) |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
qayqaq/bert-finetuned-ner | qayqaq | 2023-09-15T08:43:50Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-15T08:38:45Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Precision: 0.9362
- Recall: 0.9527
- F1: 0.9444
- Accuracy: 0.9869
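As a quick sanity check, the reported F1 follows from the precision and recall above, since F1 is their harmonic mean:

```python
def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Values reported on the evaluation set above.
print(round(f1_score(0.9362, 0.9527), 4))  # 0.9444
```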
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
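For reference, the `linear` scheduler (with no warmup steps) decays the learning rate from its initial value to zero over the course of training; a minimal sketch of that schedule, mirroring the shape of the Transformers linear schedule:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    # Linear decay from base_lr at step 0 down to 0 at total_steps,
    # i.e. a linear schedule with zero warmup steps.
    return base_lr * max(0.0, (total_steps - step) / total_steps)

total = 5268  # total optimization steps over the 3 epochs
print(linear_lr(0, total))      # 2e-05
print(linear_lr(total, total))  # 0.0
```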
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0777 | 1.0 | 1756 | 0.0723 | 0.9138 | 0.9334 | 0.9235 | 0.9807 |
| 0.0395 | 2.0 | 3512 | 0.0536 | 0.9303 | 0.9502 | 0.9401 | 0.9863 |
| 0.023 | 3.0 | 5268 | 0.0591 | 0.9362 | 0.9527 | 0.9444 | 0.9869 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
s3nh/sauce1337-BerrySauce-L2-13b-GGUF | s3nh | 2023-09-15T08:39:37Z | 1 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T07:33:31Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/sauce1337/BerrySauce-L2-13b).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
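As a rough illustration of the single-file layout: a GGUF file opens with a fixed header (magic, version, tensor count, metadata key-value count) before the metadata itself. The sketch below assumes the documented header layout and parses a synthetic buffer, not a real model file:

```python
import struct

def read_gguf_header(buf: bytes) -> dict:
    # GGUF begins with the magic b"GGUF", a uint32 version, then
    # uint64 tensor count and uint64 metadata key-value count.
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensor_count": n_tensors, "metadata_kv_count": n_kv}

# Synthetic header for illustration: version 3, 2 tensors, 5 metadata pairs.
demo = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(demo))
```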
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:-------:|:----:|:------:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
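Perplexity here is the exponential of the mean negative log-likelihood per token, so lower is better; a minimal sketch of the relationship:

```python
import math

def perplexity(token_nlls) -> float:
    # Perplexity = exp(mean negative log-likelihood per token).
    return math.exp(sum(token_nlls) / len(token_nlls))

# A model that assigns probability 1/2 to every token has perplexity 2.
print(perplexity([math.log(2)] * 4))  # 2.0 (up to float rounding)
```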
### inference
TODO
|
feverishhh/Chameleon-Llama-70B | feverishhh | 2023-09-15T08:35:25Z | 0 | 2 | null | [
"license:wtfpl",
"region:us"
]
| null | 2023-09-15T07:53:10Z | ---
license: wtfpl
---
# The Big Picture ([Brainproject.ai](http://brainproject.ai/))
The human brain is an intricate puzzle that we're continually striving to decode. Our aim is to replicate its complexity, functionality, and depth in a digital realm. In other words, we're exploring the convergence of neuroscience and artificial intelligence to glean insights into the mind's intricate workings and translate that knowledge into digital counterparts.
# Mixture of Experts
Chameleon-Llama-70B doesn't work alone. It's part of the Mixture of Experts framework. Within this structure, various models, each with their distinct competencies, collaborate. This synergy allows for a richer, more holistic approach to understanding and replicating brain functions.
# Chameleon-Llama-70B
<!-- Provide a quick summary of what the model is/does. -->
Chameleon enhances Llama-70B with a natural language planner module that dynamically composes reasoning chains from various tools:
- Module Inventory: Vision models, knowledge modules, web search, Python functions, etc.
- Natural Language Planner: Generates programs indicating a sequence of modules to execute.
- Tool Execution: Selected modules process inputs sequentially, caching context.
- Adaptability: Planner synthesizes custom programs for diverse tasks.
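The planner/executor loop described above can be pictured as a toy pipeline in which each named module transforms the running context in sequence. This is purely illustrative — the module names and functions below are hypothetical placeholders, not Chameleon's actual API:

```python
# Hypothetical module inventory; real modules would wrap vision models,
# web search, Python execution, and so on.
MODULES = {
    "web_search": lambda ctx: ctx + " +search_results",
    "knowledge": lambda ctx: ctx + " +facts",
    "answer": lambda ctx: ctx + " -> final_answer",
}

def execute_plan(plan, query):
    # The planner emits an ordered list of module names; each module
    # consumes and enriches the cached context in turn.
    ctx = query
    for name in plan:
        ctx = MODULES[name](ctx)
    return ctx

print(execute_plan(["web_search", "knowledge", "answer"], "Q"))
# Q +search_results +facts -> final_answer
```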
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Priyanshu Pareek
- **Model type:** Fine-tuned Llama with [Chameleon](https://chameleon-llm.github.io/)
- **License:** wtfpl
- **Finetuned from model:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is primed for out-of-the-box applications without the need for fine-tuning or integration into bigger systems.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
It's essential to approach the Chameleon-Llama-70B (and models like it) with an informed perspective. Recognize that while it holds immense potential, there are inherent risks, biases, and limitations. More data and insights are necessary to offer detailed recommendations.
## How to Get Started with the Model
Want to take Chameleon-Llama-70B for a spin?
```python
from transformers import ChameleonLlamaModel, ChameleonLlamaTokenizer
tokenizer = ChameleonLlamaTokenizer.from_pretrained("path-to-Chameleon-Llama-70B")
model = ChameleonLlamaModel.from_pretrained("path-to-Chameleon-Llama-70B")
input_text = "Your text here"
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```
Replace "path-to-Chameleon-Llama-70B" with the correct path or URL for the pre-trained model.
## Training Details
### Training Data
The model was trained on a combination of the original Llama datasets, integrated with data from various real-time sources like news outlets, web pages, and other real-time data feeds.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
Data from real-time sources were preprocessed to ensure a uniform format and to filter out any irrelevant or sensitive information.
#### Training Hyperparameters
- Training regime: fp16 mixed precision
- Batch size: 64
- Learning rate: 3e-4
- Optimizer: AdamW
- Training epochs: 4 |
ahyar002/audio_classification | ahyar002 | 2023-09-15T08:31:43Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-09-15T08:30:38Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.05309734513274336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset (en-US).
It achieves the following results on the evaluation set:
- Loss: 2.6700
- Accuracy: 0.0531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 15 | 2.6552 | 0.0619 |
| No log | 2.0 | 30 | 2.6700 | 0.0531 |
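For context, minds14 has 14 intent classes, so random guessing on a balanced split would score roughly 1/14 ≈ 0.0714. The accuracies above sit at or below that level, suggesting the model has not yet learned the task (assuming the evaluation split is roughly balanced):

```python
num_classes = 14  # intent classes in minds14
chance_accuracy = 1 / num_classes
print(round(chance_accuracy, 4))  # 0.0714
```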
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
NewstaR/GPTagalog | NewstaR | 2023-09-15T08:26:11Z | 0 | 0 | null | [
"langauge",
"gpt",
"remake",
"v2",
"pytorch",
"pickle",
"gpt2",
"open sourced",
"text-generation",
"tl",
"license:openrail",
"region:us"
]
| text-generation | 2023-09-13T17:50:54Z | ---
license: openrail
language:
- tl
tags:
- langauge
- gpt
- remake
- v2
- pytorch
- pickle
- gpt2
- open sourced
pipeline_tag: text-generation
---
Colab used to train this model 👉👉 [gpt remaker](https://colab.research.google.com/drive/1O9uFQVP9EUhguwhx2qD4pk9PbRCdnijE?usp=sharing)
Both training and inference are included in the colab. Happy coding!
# Model Information
- Model Name: GPTagalog
- Version: 2
- Training Iterations: 143,000
- Learning Rate: 6e-4
- Language: Tagalog
- Compatibility: Pickle (pkl) format (cuda)
- Model Size: 30MB
- Training Time: Approx 2 hours and 30 minutes
- Usage: Experimental, not suitable for commercial purposes
# Model Description
This model was designed to explore the capabilities of training a language model on a small dataset and to see how it performs at generating text in Tagalog.
# Training Details
- **Iterations and Epochs:** GPTagalog was trained for 143,000 iterations over 60 epochs. This extended training period aimed to refine its language generation abilities.
- **Learning Rate:** The model was trained with a learning rate of 6e-4, which was chosen to optimize learning and convergence.
- **Model Size:** GPTagalog is relatively small, with a file size of 30MB. This small size is due to its experimental nature and limited resources.
# Usage Guidelines
- **Experimental Use:** GPTagalog Version 2 is an experimental model and is not recommended for commercial purposes. It may have limitations in generating coherent and contextually accurate text.
- **Resource Constraints:** Due to resource limitations, the model's training was limited to 143,000 iterations and a maximum training time of 6 hours. This is considerably shorter than the training duration of larger models like GPT-2, which has 143 million parameters and takes several days to train. |
Charlenecuteeee/my_awesome_wnut_model | Charlenecuteeee | 2023-09-15T08:25:37Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-15T07:10:04Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Charlenecuteeee/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Charlenecuteeee/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1345
- Validation Loss: 0.2737
- Train Precision: 0.4577
- Train Recall: 0.3756
- Train F1: 0.4126
- Train Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3414 | 0.3297 | 0.3182 | 0.0502 | 0.0868 | 0.9247 | 0 |
| 0.1711 | 0.2952 | 0.5020 | 0.3062 | 0.3804 | 0.9392 | 1 |
| 0.1345 | 0.2737 | 0.4577 | 0.3756 | 0.4126 | 0.9416 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nineninecorn/koalpaca-polyglot-12.8b-bill | nineninecorn | 2023-09-15T07:58:15Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T07:58:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Souvik123/results | Souvik123 | 2023-09-15T07:48:15Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-distilled-squad",
"base_model:finetune:distilbert/distilbert-base-uncased-distilled-squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2023-09-15T07:48:00Z | ---
license: apache-2.0
base_model: distilbert-base-uncased-distilled-squad
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 1
- eval_batch_size: 11
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.0541 | 1.0 | 4000 | 5.9506 |
| 6.0123 | 2.0 | 8000 | 5.9506 |
| 6.0064 | 3.0 | 12000 | 5.9506 |
| 5.9816 | 4.0 | 16000 | 5.9506 |
| 5.9749 | 5.0 | 20000 | 5.9506 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
hoangphu7122002ai/lora_exp | hoangphu7122002ai | 2023-09-15T07:48:02Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T07:17:02Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
Geotrend/bert-base-15lang-cased | Geotrend | 2023-09-15T07:47:55Z | 973 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"ar",
"ru",
"vi",
"el",
"bg",
"th",
"tr",
"hi",
"ur",
"sw",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
language:
- multilingual
- en
- fr
- es
- de
- zh
- ar
- ru
- vi
- el
- bg
- th
- tr
- hi
- ur
- sw
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
- text: "تقع سويسرا في [MASK] أوروبا"
- text: "إسمي محمد وأسكن في [MASK]."
---
# bert-base-15lang-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
The measurements below have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types#n1_machine_type):
| Model | Num parameters | Size | Memory | Loading time |
| ------------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| Geotrend/bert-base-15lang-cased | 141 million | 564 MB | 1098 MB | 3.1 sec |
Handled languages: en, fr, es, de, zh, ar, ru, vi, el, bg, th, tr, hi, ur and sw.
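The savings in the table come almost entirely from shrinking the token-embedding matrix. A rough back-of-envelope, assuming BERT-base's hidden size of 768 (the exact vocabulary sizes are not stated here, so the row count below is an estimate):

```python
hidden_size = 768  # BERT-base hidden size
saved_params = (178 - 141) * 10**6  # parameter difference from the table above
removed_vocab_rows = saved_params / hidden_size
print(round(removed_vocab_rows))  # roughly 48,000 embedding rows dropped
```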
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-15lang-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-15lang-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
CyberHarem/koshiba_mai_watashinoyuriwaoshigotodesu | CyberHarem | 2023-09-15T07:46:54Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/koshiba_mai_watashinoyuriwaoshigotodesu",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T07:28:20Z | ---
license: mit
datasets:
- CyberHarem/koshiba_mai_watashinoyuriwaoshigotodesu
pipeline_tag: text-to-image
tags:
- art
---
# Lora of koshiba_mai_watashinoyuriwaoshigotodesu
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). The auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7800, you need to download `7800/koshiba_mai_watashinoyuriwaoshigotodesu.pt` as the embedding and `7800/koshiba_mai_watashinoyuriwaoshigotodesu.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7800**, with the score of 0.993. The trigger words are:
1. `koshiba_mai_watashinoyuriwaoshigotodesu`
2. `pink_hair, short_hair, hairband, ribbon, hair_ribbon, smile, bangs, black_ribbon, blush, open_mouth, brown_eyes`
We regret that this model is not recommended for the following groups:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| **7800** | **0.993** | [**Download**](7800/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7800/previews/nude.png) | [<NSFW, click to see>](7800/previews/nude2.png) |  |  |
| 7280 | 0.943 | [Download](7280/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7280/previews/nude.png) | [<NSFW, click to see>](7280/previews/nude2.png) |  |  |
| 6760 | 0.991 | [Download](6760/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6760/previews/nude.png) | [<NSFW, click to see>](6760/previews/nude2.png) |  |  |
| 6240 | 0.990 | [Download](6240/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6240/previews/nude.png) | [<NSFW, click to see>](6240/previews/nude2.png) |  |  |
| 5720 | 0.990 | [Download](5720/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) |  |  |
| 5200 | 0.991 | [Download](5200/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5200/previews/nude.png) | [<NSFW, click to see>](5200/previews/nude2.png) |  |  |
| 4680 | 0.991 | [Download](4680/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4680/previews/nude.png) | [<NSFW, click to see>](4680/previews/nude2.png) |  |  |
| 4160 | 0.989 | [Download](4160/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4160/previews/nude.png) | [<NSFW, click to see>](4160/previews/nude2.png) |  |  |
| 3640 | 0.990 | [Download](3640/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3640/previews/nude.png) | [<NSFW, click to see>](3640/previews/nude2.png) |  |  |
| 3120 | 0.990 | [Download](3120/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3120/previews/nude.png) | [<NSFW, click to see>](3120/previews/nude2.png) |  |  |
| 2600 | 0.989 | [Download](2600/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2600/previews/nude.png) | [<NSFW, click to see>](2600/previews/nude2.png) |  |  |
| 2080 | 0.989 | [Download](2080/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2080/previews/nude.png) | [<NSFW, click to see>](2080/previews/nude2.png) |  |  |
| 1560 | 0.986 | [Download](1560/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1560/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1560/previews/nude.png) | [<NSFW, click to see>](1560/previews/nude2.png) |  |  |
| 1040 | 0.942 | [Download](1040/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1040/previews/nude.png) | [<NSFW, click to see>](1040/previews/nude2.png) |  |  |
| 520 | 0.981 | [Download](520/koshiba_mai_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](520/previews/nude.png) | [<NSFW, click to see>](520/previews/nude2.png) |  |  |
|
RtwC/bert-finetuned-ner | RtwC | 2023-09-15T07:46:38Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-15T07:35:27Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.933597621407334
- name: Recall
type: recall
value: 0.9511948838774823
- name: F1
type: f1
value: 0.9423141047015672
- name: Accuracy
type: accuracy
value: 0.9861217401542356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9336
- Recall: 0.9512
- F1: 0.9423
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0778 | 1.0 | 1756 | 0.0774 | 0.9014 | 0.9305 | 0.9157 | 0.9790 |
| 0.0405 | 2.0 | 3512 | 0.0561 | 0.9286 | 0.9498 | 0.9391 | 0.9858 |
| 0.0245 | 3.0 | 5268 | 0.0592 | 0.9336 | 0.9512 | 0.9423 | 0.9861 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
om-ashish-soni/pos-morph-analysis-hn-pud | om-ashish-soni | 2023-09-15T07:41:43Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:universal_dependencies",
"base_model:om-ashish-soni/pos-morph-analysis-hn-pud",
"base_model:finetune:om-ashish-soni/pos-morph-analysis-hn-pud",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-09-15T06:55:06Z | ---
license: apache-2.0
base_model: om-ashish-soni/pos-morph-analysis-hn-pud
tags:
- generated_from_trainer
datasets:
- universal_dependencies
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pos-morph-analysis-hn-pud
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: universal_dependencies
type: universal_dependencies
config: hi_pud
split: test
args: hi_pud
metrics:
- name: Precision
type: precision
value: 0.85833831440526
- name: Recall
type: recall
value: 0.866888016903109
- name: F1
type: f1
value: 0.8625919807778947
- name: Accuracy
type: accuracy
value: 0.8504172571248623
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pos-morph-analysis-hn-pud
This model is a fine-tuned version of [om-ashish-soni/pos-morph-analysis-hn-pud](https://huggingface.co/om-ashish-soni/pos-morph-analysis-hn-pud) on the universal_dependencies dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8746
- Precision: 0.8583
- Recall: 0.8669
- F1: 0.8626
- Accuracy: 0.8504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 88 | 0.8738 | 0.8625 | 0.8669 | 0.8647 | 0.8539 |
| No log | 2.0 | 176 | 0.9034 | 0.8506 | 0.8578 | 0.8542 | 0.8424 |
| No log | 3.0 | 264 | 0.9011 | 0.8608 | 0.8678 | 0.8643 | 0.8548 |
| No log | 4.0 | 352 | 0.8746 | 0.8583 | 0.8669 | 0.8626 | 0.8504 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
FinchResearch/Gurkha-copilot-1b | FinchResearch | 2023-09-15T07:40:10Z | 154 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_bigcode",
"text-generation",
"code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-29T07:09:03Z | ---
license: apache-2.0
tags:
- code
---
# Model Card: Gurkha Coding Assistant
## Overview
**Name:** Gurkha Coding Assistant
**Model Type:** Text Generation
**Model Size:** 1 billion parameters
**Functionality:** Code generation, code completion, text generation
## Description
Gurkha is a versatile coding assistant designed to excel in generating and completing code tasks. With a robust architecture comprising 1 billion parameters, Gurkha offers efficiency and proficiency in various coding scenarios. Whether you need code snippets, code completions, or general text generation, Gurkha is here to provide reliable assistance.
## Use Cases
- Efficient code generation for multiple programming languages
- Accelerated code completion to streamline development workflows
- Automated generation of documentation and comments
- Creation of illustrative code samples and examples
- Exploring coding concepts through interactive code generation
## Strengths
- Strong parameter base for robust performance
- Versatility in addressing diverse coding challenges
- Proficient generation of high-quality code
- Adaptability to different coding styles and languages
## Limitations
- Focus is primarily on code-related tasks
- Complexity might require specific instructions for precise code generation
- Limited contextual understanding outside of coding domain
## Ethical Considerations
Gurkha's code generation adheres to ethical guidelines, producing content that is unbiased, respectful, and non-discriminatory. Users are encouraged to review generated code to ensure alignment with industry standards and coding best practices.
|
pepe4235/second_try | pepe4235 | 2023-09-15T07:32:13Z | 0 | 0 | peft | [
"peft",
"pytorch",
"llama",
"region:us"
]
| null | 2023-09-12T19:29:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
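As an illustrative sketch (not part of the original training artifacts), the 4-bit settings listed above correspond to the keyword arguments one would pass to `transformers.BitsAndBytesConfig`; the plain dict below mirrors those fields, with the string `"float16"` standing in for `torch.float16` so the snippet stays dependency-free:

```python
# Illustrative only: the bitsandbytes 4-bit settings above, expressed as the
# kwargs for transformers.BitsAndBytesConfig. In real code, pass
# torch.float16 (not the string "float16") as bnb_4bit_compute_dtype.
quant_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",
}

# e.g. BitsAndBytesConfig(**quant_kwargs) after swapping in torch.float16
print(quant_kwargs["bnb_4bit_quant_type"])
```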
### Framework versions
- PEFT 0.6.0.dev0
|
progerjkd/dogbooth | progerjkd | 2023-09-15T07:27:00Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-14T02:27:01Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - progerjkd/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
YanaS/llama2-bg-GGUF | YanaS | 2023-09-15T07:22:54Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"text-generation",
"bg",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-09-15T06:36:19Z | ---
language:
- bg
license: mit
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
---
**Description**
GGUF Format model files for [this project](https://huggingface.co/bogdan1/llama2-bg).
From [@bogdan1](https://huggingface.co/bogdan1): Llama-2-7b-base fine-tuned on the Chitanka dataset and a dataset made of scraped news comments
dating mostly from 2022/2023. Big Thank you :)
**About GGUF**
**Introduction:**
GGUF was introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported.
GGUF is a successor file format to GGML, GGMF, and GGJT. It is designed to provide a comprehensive solution for model loading,
ensuring unambiguous data representation while offering extensibility to accommodate future enhancements. GGUF eliminates the need for disruptive changes,
introduces support for various non-llama models such as falcon, rwkv, and bloom, and simplifies configuration settings by automating prompt format adjustments.
**Key Features:**
1. **No More Breaking Changes:** GGUF is engineered to prevent compatibility issues with older models, ensuring a seamless transition from previous file formats like GGML, GGMF, and GGJT.
2. **Support for Non-Llama Models:** GGUF extends its compatibility to a wide range of models beyond llamas, including falcon, rwkv, bloom, and more.
3. **Streamlined Configuration:** Say goodbye to complex settings like rope-freq-base, rope-freq-scale, gqa, and rms-norm-eps. GGUF simplifies the configuration process, making it more user-friendly.
4. **Automatic Prompt Format:** GGUF introduces the ability to automatically set prompt formats, reducing the need for manual adjustments.
5. **Extensibility:** GGUF is designed to accommodate future updates and enhancements, ensuring long-term compatibility and adaptability.
6. **Enhanced Tokenization:** GGUF features improved tokenization code, including support for special tokens, which enhances overall performance, especially for models using new special tokens and custom prompt templates.
**Supported Clients and Libraries:**
GGUF is supported by a variety of clients and libraries, making it accessible and versatile for different use cases:
1. [**llama.cpp**](https://github.com/ggerganov/llama.cpp).
2. [**text-generation-webui**](https://github.com/oobabooga/text-generation-webui)
3. [**KoboldCpp**](https://github.com/LostRuins/koboldcpp)
4. [**LM Studio**](https://lmstudio.ai/)
5. [**LoLLMS Web UI**](https://github.com/ParisNeo/lollms-webui)
6. [**ctransformers**](https://github.com/marella/ctransformers)
7. [**llama-cpp-python**](https://github.com/abetlen/llama-cpp-python)
8. [**candle**](https://github.com/huggingface/candle) |
stablediffusionapi/revanimatedvdg | stablediffusionapi | 2023-09-15T07:19:24Z | 29 | 0 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-09-15T07:18:06Z | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# RevAnimated_vdg API Inference

## Get API Key
Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.
Replace Key in below code, change **model_id** to "revanimatedvdg"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/revanimatedvdg)
Model link: [View model](https://stablediffusionapi.com/models/revanimatedvdg)
Credits: [View credits](https://civitai.com/?query=RevAnimated_vdg)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "revanimatedvdg",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Billykiiiiim/Realistic_local_repaint | Billykiiiiim | 2023-09-15T07:14:19Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-09-15T02:49:13Z | ---
license: other
---
This is a Stable Diffusion Asian-style inpainting (partial redraw) model, mainly intended for inpainting photorealistic Asian female figures.
It can meet essentially all your needs for partial redrawing of human figures.
Please use this model responsibly, in reasonable and legal scenarios. |
hattran/gpt2-vn-clf_PROMPT_TUNING_CAUSAL_LM | hattran | 2023-09-15T07:14:05Z | 1 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T05:32:51Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
vineetsharma/speecht5_tts-finetuned-voxpopuli-sk-v2 | vineetsharma | 2023-09-15T06:58:55Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
]
| text-to-speech | 2023-07-07T05:43:16Z | ---
license: mit
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
base_model: microsoft/speecht5_tts
model-index:
- name: speecht5_tts-finetuned-voxpopuli-sk-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts-finetuned-voxpopuli-sk-v2
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4999 | 5.24 | 1000 | 0.4523 |
| 0.4763 | 10.47 | 2000 | 0.4408 |
| 0.4676 | 15.71 | 3000 | 0.4366 |
| 0.4665 | 20.94 | 4000 | 0.4354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vineetsharma/whisper-small-dv | vineetsharma | 2023-09-15T06:57:45Z | 78 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-07-03T10:19:52Z | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: Whisper Small DV
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- type: wer
value: 13.509754146816427
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small DV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
- Wer Ortho: 62.8665
- Wer: 13.5098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1243 | 1.63 | 500 | 0.1709 | 62.8665 | 13.5098 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Tensoic/Phi-1-5-Open-Platypus | Tensoic | 2023-09-15T06:57:28Z | 57 | 5 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"custom_code",
"dataset:garage-bAInd/Open-Platypus",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-14T18:19:23Z | ---
datasets:
- garage-bAInd/Open-Platypus
---
Phi-1.5 fine-tuned on the Open-Platypus dataset

|
CyberHarem/chibana_sumika_watashinoyuriwaoshigotodesu | CyberHarem | 2023-09-15T06:43:12Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/chibana_sumika_watashinoyuriwaoshigotodesu",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T06:20:41Z | ---
license: mit
datasets:
- CyberHarem/chibana_sumika_watashinoyuriwaoshigotodesu
pipeline_tag: text-to-image
tags:
- art
---
# Lora of chibana_sumika_watashinoyuriwaoshigotodesu
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 10500, you need to download `10500/chibana_sumika_watashinoyuriwaoshigotodesu.pt` as the embedding and `10500/chibana_sumika_watashinoyuriwaoshigotodesu.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
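As a hedged sketch of that workflow using the `diffusers` library (untested here; the base-model id, file paths, and trigger words are taken from this card, but exact API availability depends on your `diffusers` version):

```python
# Untested sketch: load the step-10500 .pt file as a textual-inversion
# embedding and the .safetensors file as LoRA weights with diffusers.
# Assumes diffusers >= 0.17, a CUDA GPU, and the files downloaded locally.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("10500/chibana_sumika_watashinoyuriwaoshigotodesu.pt")
pipe.load_lora_weights("10500/chibana_sumika_watashinoyuriwaoshigotodesu.safetensors")

image = pipe(
    "chibana_sumika_watashinoyuriwaoshigotodesu, long_hair, blonde_hair, smile"
).images[0]
image.save("preview.png")
```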
**The best step we recommend is 10500**, with the score of 0.989. The trigger words are:
1. `chibana_sumika_watashinoyuriwaoshigotodesu`
2. `long_hair, blonde_hair, glasses, green_eyes, smile`
For the following groups, it is not recommended to use this model and we express regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who finds the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:----------|:----------|:---------------------------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| **10500** | **0.989** | [**Download**](10500/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10500/previews/nude.png) | [<NSFW, click to see>](10500/previews/nude2.png) |  |  |
| 9800 | 0.968 | [Download](9800/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9800/previews/nude.png) | [<NSFW, click to see>](9800/previews/nude2.png) |  |  |
| 9100 | 0.982 | [Download](9100/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9100/previews/nude.png) | [<NSFW, click to see>](9100/previews/nude2.png) |  |  |
| 8400 | 0.956 | [Download](8400/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8400/previews/nude.png) | [<NSFW, click to see>](8400/previews/nude2.png) |  |  |
| 7700 | 0.980 | [Download](7700/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7700/previews/nude.png) | [<NSFW, click to see>](7700/previews/nude2.png) |  |  |
| 7000 | 0.979 | [Download](7000/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7000/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7000/previews/nude.png) | [<NSFW, click to see>](7000/previews/nude2.png) |  |  |
| 6300 | 0.969 | [Download](6300/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6300/previews/nude.png) | [<NSFW, click to see>](6300/previews/nude2.png) |  |  |
| 5600 | 0.976 | [Download](5600/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5600/previews/nude.png) | [<NSFW, click to see>](5600/previews/nude2.png) |  |  |
| 4900 | 0.976 | [Download](4900/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4900/previews/nude.png) | [<NSFW, click to see>](4900/previews/nude2.png) |  |  |
| 4200 | 0.947 | [Download](4200/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4200/previews/nude.png) | [<NSFW, click to see>](4200/previews/nude2.png) |  |  |
| 3500 | 0.949 | [Download](3500/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3500/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3500/previews/nude.png) | [<NSFW, click to see>](3500/previews/nude2.png) |  |  |
| 2800 | 0.966 | [Download](2800/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2800/previews/nude.png) | [<NSFW, click to see>](2800/previews/nude2.png) |  |  |
| 2100 | 0.904 | [Download](2100/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2100/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2100/previews/nude.png) | [<NSFW, click to see>](2100/previews/nude2.png) |  |  |
| 1400 | 0.900 | [Download](1400/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [<NSFW, click to see>](1400/previews/nude2.png) |  |  |
| 700 | 0.885 | [Download](700/chibana_sumika_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](700/previews/bondage.png) |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [<NSFW, click to see>](700/previews/nude2.png) |  |  |
|
TigerResearch/tigerbot-13b-base-v1 | TigerResearch | 2023-09-15T06:42:55Z | 70 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-08-08T03:53:45Z | ---
license: apache-2.0
language:
- zh
- en
---
<div style="width: 100%;">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
## Github
https://github.com/TigerResearch/TigerBot
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-13b-base-v1")
model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-13b-base-v1")
```
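A follow-on usage sketch (untested; the prompt and decoding settings are illustrative placeholders, not from the upstream repo — the 13B weights need roughly 26 GB of memory in fp16):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-13b-base-v1")
model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-13b-base-v1")

# Greedy decoding on an illustrative prompt; this is a base model, so expect
# continuation-style completions rather than chat-style answers.
inputs = tokenizer("Write a short poem about tigers:\n", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```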
|
facebook/dinov2-base-imagenet1k-1-layer | facebook | 2023-09-15T06:40:46Z | 2,823 | 6 | transformers | [
"transformers",
"pytorch",
"dinov2",
"image-classification",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2304.07193",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-09-14T19:59:55Z | ---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the model for classifying an image among one of the [1000 ImageNet labels](https://huggingface.co/datasets/huggingface/label-files/blob/main/imagenet-1k-id2label.json). See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for
other fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base-imagenet1k-1-layer')
model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-base-imagenet1k-1-layer')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
aegon-h/phi-1_5 | aegon-h | 2023-09-15T06:31:09Z | 67 | 3 | transformers | [
"transformers",
"pytorch",
"mixformer-sequential",
"text-generation",
"microsoft",
"phi",
"custom_code",
"en",
"license:other",
"autotrain_compatible",
"region:us"
]
| text-generation | 2023-09-14T15:23:30Z | ---
license: other
language:
- en
pipeline_tag: text-generation
model_creator: microsoft
model_link: https://huggingface.co/microsoft/phi-1_5
model_name: phi-1_5
edited_by: agonh
tags:
- microsoft
- phi
---
# Phi-1_5
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [phi-1_5](https://huggingface.co/microsoft/phi-1_5)
## Description
This repo contains files for [microsoft's phi-1_5](https://huggingface.co/microsoft/phi-1_5).
### License
The model is licensed under the "Research License" |
CyberHarem/hinatsu_pokemon | CyberHarem | 2023-09-15T06:21:55Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/hinatsu_pokemon",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T06:02:24Z | ---
license: mit
datasets:
- CyberHarem/hinatsu_pokemon
pipeline_tag: text-to-image
tags:
- art
---
# Lora of hinatsu_pokemon
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 8120, you need to download `8120/hinatsu_pokemon.pt` as the embedding and `8120/hinatsu_pokemon.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
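The two-file loading procedure described above can be sketched in code. This is a minimal illustration only: the step number and file names come from the table below, while the base model choice and the diffusers loader calls are assumptions, so the heavy pipeline construction is deferred into a function rather than executed directly.

```python
# Pairing the embedding (.pt) and LoRA (.safetensors) files for one step.
step = 8120
embedding_path = f"{step}/hinatsu_pokemon.pt"        # used as a textual-inversion embedding
lora_path = f"{step}/hinatsu_pokemon.safetensors"    # loaded as LoRA weights

def build_pipeline():
    # Illustrative only: downloads the preview base model; diffusers-style
    # loader calls are an assumption about how these files can be consumed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "Meina/MeinaMix_V11", torch_dtype=torch.float16
    )
    pipe.load_textual_inversion(embedding_path, token="hinatsu_pokemon")
    pipe.load_lora_weights(str(step), weight_name="hinatsu_pokemon.safetensors")
    return pipe
```

A prompt would then combine the trigger words, e.g. `"hinatsu_pokemon, short_hair, red_hair"`.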
**The best step we recommend is 8120**, with a score of 0.859. The trigger words are:
1. `hinatsu_pokemon`
2. `short_hair, bangs, red_hair, red_eyes, smile, cowlick, blush, breasts`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:-------------------------------------------------|:--------------------------------------------------|:-----------------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 8700 | 0.824 | [Download](8700/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8700/previews/pattern_9.png) | [<NSFW, click to see>](8700/previews/pattern_10.png) | [<NSFW, click to see>](8700/previews/pattern_11.png) | [<NSFW, click to see>](8700/previews/bikini.png) | [<NSFW, click to see>](8700/previews/bondage.png) | [<NSFW, click to see>](8700/previews/free.png) |  |  | [<NSFW, click to see>](8700/previews/nude.png) | [<NSFW, click to see>](8700/previews/nude2.png) |  |  |
| **8120** | **0.859** | [**Download**](8120/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8120/previews/pattern_9.png) | [<NSFW, click to see>](8120/previews/pattern_10.png) | [<NSFW, click to see>](8120/previews/pattern_11.png) | [<NSFW, click to see>](8120/previews/bikini.png) | [<NSFW, click to see>](8120/previews/bondage.png) | [<NSFW, click to see>](8120/previews/free.png) |  |  | [<NSFW, click to see>](8120/previews/nude.png) | [<NSFW, click to see>](8120/previews/nude2.png) |  |  |
| 7540 | 0.858 | [Download](7540/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7540/previews/pattern_9.png) | [<NSFW, click to see>](7540/previews/pattern_10.png) | [<NSFW, click to see>](7540/previews/pattern_11.png) | [<NSFW, click to see>](7540/previews/bikini.png) | [<NSFW, click to see>](7540/previews/bondage.png) | [<NSFW, click to see>](7540/previews/free.png) |  |  | [<NSFW, click to see>](7540/previews/nude.png) | [<NSFW, click to see>](7540/previews/nude2.png) |  |  |
| 6960 | 0.771 | [Download](6960/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6960/previews/pattern_9.png) | [<NSFW, click to see>](6960/previews/pattern_10.png) | [<NSFW, click to see>](6960/previews/pattern_11.png) | [<NSFW, click to see>](6960/previews/bikini.png) | [<NSFW, click to see>](6960/previews/bondage.png) | [<NSFW, click to see>](6960/previews/free.png) |  |  | [<NSFW, click to see>](6960/previews/nude.png) | [<NSFW, click to see>](6960/previews/nude2.png) |  |  |
| 6380 | 0.828 | [Download](6380/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6380/previews/pattern_9.png) | [<NSFW, click to see>](6380/previews/pattern_10.png) | [<NSFW, click to see>](6380/previews/pattern_11.png) | [<NSFW, click to see>](6380/previews/bikini.png) | [<NSFW, click to see>](6380/previews/bondage.png) | [<NSFW, click to see>](6380/previews/free.png) |  |  | [<NSFW, click to see>](6380/previews/nude.png) | [<NSFW, click to see>](6380/previews/nude2.png) |  |  |
| 5800 | 0.841 | [Download](5800/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5800/previews/pattern_9.png) | [<NSFW, click to see>](5800/previews/pattern_10.png) | [<NSFW, click to see>](5800/previews/pattern_11.png) | [<NSFW, click to see>](5800/previews/bikini.png) | [<NSFW, click to see>](5800/previews/bondage.png) | [<NSFW, click to see>](5800/previews/free.png) |  |  | [<NSFW, click to see>](5800/previews/nude.png) | [<NSFW, click to see>](5800/previews/nude2.png) |  |  |
| 5220 | 0.816 | [Download](5220/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5220/previews/pattern_9.png) | [<NSFW, click to see>](5220/previews/pattern_10.png) | [<NSFW, click to see>](5220/previews/pattern_11.png) | [<NSFW, click to see>](5220/previews/bikini.png) | [<NSFW, click to see>](5220/previews/bondage.png) | [<NSFW, click to see>](5220/previews/free.png) |  |  | [<NSFW, click to see>](5220/previews/nude.png) | [<NSFW, click to see>](5220/previews/nude2.png) |  |  |
| 4640 | 0.756 | [Download](4640/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4640/previews/pattern_9.png) | [<NSFW, click to see>](4640/previews/pattern_10.png) | [<NSFW, click to see>](4640/previews/pattern_11.png) | [<NSFW, click to see>](4640/previews/bikini.png) | [<NSFW, click to see>](4640/previews/bondage.png) | [<NSFW, click to see>](4640/previews/free.png) |  |  | [<NSFW, click to see>](4640/previews/nude.png) | [<NSFW, click to see>](4640/previews/nude2.png) |  |  |
| 4060 | 0.760 | [Download](4060/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4060/previews/pattern_9.png) | [<NSFW, click to see>](4060/previews/pattern_10.png) | [<NSFW, click to see>](4060/previews/pattern_11.png) | [<NSFW, click to see>](4060/previews/bikini.png) | [<NSFW, click to see>](4060/previews/bondage.png) | [<NSFW, click to see>](4060/previews/free.png) |  |  | [<NSFW, click to see>](4060/previews/nude.png) | [<NSFW, click to see>](4060/previews/nude2.png) |  |  |
| 3480 | 0.818 | [Download](3480/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3480/previews/pattern_9.png) | [<NSFW, click to see>](3480/previews/pattern_10.png) | [<NSFW, click to see>](3480/previews/pattern_11.png) | [<NSFW, click to see>](3480/previews/bikini.png) | [<NSFW, click to see>](3480/previews/bondage.png) | [<NSFW, click to see>](3480/previews/free.png) |  |  | [<NSFW, click to see>](3480/previews/nude.png) | [<NSFW, click to see>](3480/previews/nude2.png) |  |  |
| 2900 | 0.753 | [Download](2900/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2900/previews/pattern_9.png) | [<NSFW, click to see>](2900/previews/pattern_10.png) | [<NSFW, click to see>](2900/previews/pattern_11.png) | [<NSFW, click to see>](2900/previews/bikini.png) | [<NSFW, click to see>](2900/previews/bondage.png) | [<NSFW, click to see>](2900/previews/free.png) |  |  | [<NSFW, click to see>](2900/previews/nude.png) | [<NSFW, click to see>](2900/previews/nude2.png) |  |  |
| 2320 | 0.742 | [Download](2320/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2320/previews/pattern_9.png) | [<NSFW, click to see>](2320/previews/pattern_10.png) | [<NSFW, click to see>](2320/previews/pattern_11.png) | [<NSFW, click to see>](2320/previews/bikini.png) | [<NSFW, click to see>](2320/previews/bondage.png) | [<NSFW, click to see>](2320/previews/free.png) |  |  | [<NSFW, click to see>](2320/previews/nude.png) | [<NSFW, click to see>](2320/previews/nude2.png) |  |  |
| 1740 | 0.747 | [Download](1740/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1740/previews/pattern_9.png) | [<NSFW, click to see>](1740/previews/pattern_10.png) | [<NSFW, click to see>](1740/previews/pattern_11.png) | [<NSFW, click to see>](1740/previews/bikini.png) | [<NSFW, click to see>](1740/previews/bondage.png) | [<NSFW, click to see>](1740/previews/free.png) |  |  | [<NSFW, click to see>](1740/previews/nude.png) | [<NSFW, click to see>](1740/previews/nude2.png) |  |  |
| 1160 | 0.730 | [Download](1160/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1160/previews/pattern_9.png) | [<NSFW, click to see>](1160/previews/pattern_10.png) | [<NSFW, click to see>](1160/previews/pattern_11.png) | [<NSFW, click to see>](1160/previews/bikini.png) | [<NSFW, click to see>](1160/previews/bondage.png) | [<NSFW, click to see>](1160/previews/free.png) |  |  | [<NSFW, click to see>](1160/previews/nude.png) | [<NSFW, click to see>](1160/previews/nude2.png) |  |  |
| 580 | 0.603 | [Download](580/hinatsu_pokemon.zip) |  |  |  |  |  |  |  |  | [<NSFW, click to see>](580/previews/pattern_9.png) | [<NSFW, click to see>](580/previews/pattern_10.png) | [<NSFW, click to see>](580/previews/pattern_11.png) | [<NSFW, click to see>](580/previews/bikini.png) | [<NSFW, click to see>](580/previews/bondage.png) | [<NSFW, click to see>](580/previews/free.png) |  |  | [<NSFW, click to see>](580/previews/nude.png) | [<NSFW, click to see>](580/previews/nude2.png) |  |  |
|
Sachin9474/cart_detection | Sachin9474 | 2023-09-15T06:16:56Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-09-15T04:52:22Z | widget:
- text: "Jens Peter Hansen kommer fra Danmark" |
vstudent/dummy-model | vstudent | 2023-09-15T05:39:37Z | 59 | 0 | transformers | [
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-09-15T05:38:40Z | ---
license: mit
base_model: camembert-base
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
myxy/SConv-Wiki-500M | myxy | 2023-09-15T05:38:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2023-09-15T02:47:54Z | ---
license: apache-2.0
---
## myxy/SConv-Wiki-500M
[Repository](https://github.com/myxyy/SConv)
## Usage
First, start the container.
```
docker compose up -d
docker exec -it sconv bash
```
Place `weight.ckpt` in the `sconv/weight/` folder, then start the text-generation script with the following command.
```
make predict
```
## Branch
Operation has been verified on the following branch: https://github.com/myxyy/SConv/tree/c059ab44c21aba0f7e6ee22ed8d6aeae348ac161 |
nigelyeap/ppo-LunarLander-v2 | nigelyeap | 2023-09-15T05:29:58Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-09-15T05:29:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.32 +/- 14.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption based on the usual sb3 naming convention.
checkpoint = load_from_hub(
    repo_id="nigelyeap/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
kear24100712/katherinia123 | kear24100712 | 2023-09-15T05:28:42Z | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
]
| text-to-image | 2023-09-13T20:46:33Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: katherinia123
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
Cartinoe5930/lima-2-7b-GPTQ | Cartinoe5930 | 2023-09-15T05:26:20Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T05:19:47Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: True
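The parameters listed above can be reconstructed as a plain config mapping. This is a minimal sketch for illustration; the keys roughly mirror `transformers.GPTQConfig` arguments, but the mapping to that class (and the exact object PEFT serialized) is an assumption.

```python
# Illustrative reconstruction of the GPTQ quantization config listed above.
gptq_config = {
    "quant_method": "gptq",
    "bits": 4,                 # 4-bit weight quantization
    "group_size": 128,         # quantization group size
    "damp_percent": 0.01,
    "desc_act": False,
    "sym": True,               # symmetric quantization
    "true_sequential": True,
    "use_cuda_fp16": False,
    "batch_size": 1,
    "disable_exllama": True,
}
```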
### Framework versions
- PEFT 0.5.0
|
takumi12/id2pg_pattern1_en_batchsize8_epoch5 | takumi12 | 2023-09-15T05:20:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T05:20:19Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
CyberHarem/mamiya_kanoko_watashinoyuriwaoshigotodesu | CyberHarem | 2023-09-15T05:19:38Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/mamiya_kanoko_watashinoyuriwaoshigotodesu",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T04:57:44Z | ---
license: mit
datasets:
- CyberHarem/mamiya_kanoko_watashinoyuriwaoshigotodesu
pipeline_tag: text-to-image
tags:
- art
---
# Lora of mamiya_kanoko_watashinoyuriwaoshigotodesu
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5440, you need to download `5440/mamiya_kanoko_watashinoyuriwaoshigotodesu.pt` as the embedding and `5440/mamiya_kanoko_watashinoyuriwaoshigotodesu.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 5440**, with a score of 0.829. The trigger words are:
1. `mamiya_kanoko_watashinoyuriwaoshigotodesu`
2. `blue_hair, green_eyes, blush, bangs, closed_mouth, short_hair, purple_hair`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-------------------------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:---------------------------------------------------|:-------------------------------------------|:---------------------------------------------------|:---------------------------------------|:---------------------------------------|:---------------------------------------|:------------------------------------------------|:-------------------------------------------------|:---------------------------------------|:-------------------------------------------|
| 10200 | 0.829 | [Download](10200/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](10200/previews/bondage.png) |  |  |  | [<NSFW, click to see>](10200/previews/nude.png) | [<NSFW, click to see>](10200/previews/nude2.png) |  |  |
| 9520 | 0.828 | [Download](9520/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9520/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9520/previews/nude.png) | [<NSFW, click to see>](9520/previews/nude2.png) |  |  |
| 8840 | 0.764 | [Download](8840/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8840/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8840/previews/nude.png) | [<NSFW, click to see>](8840/previews/nude2.png) |  |  |
| 8160 | 0.799 | [Download](8160/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8160/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8160/previews/nude.png) | [<NSFW, click to see>](8160/previews/nude2.png) |  |  |
| 7480 | 0.827 | [Download](7480/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7480/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7480/previews/nude.png) | [<NSFW, click to see>](7480/previews/nude2.png) |  |  |
| 6800 | 0.794 | [Download](6800/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6800/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6800/previews/nude.png) | [<NSFW, click to see>](6800/previews/nude2.png) |  |  |
| 6120 | 0.797 | [Download](6120/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6120/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6120/previews/nude.png) | [<NSFW, click to see>](6120/previews/nude2.png) |  |  |
| **5440** | **0.829** | [**Download**](5440/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5440/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5440/previews/nude.png) | [<NSFW, click to see>](5440/previews/nude2.png) |  |  |
| 4760 | 0.799 | [Download](4760/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4760/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) |  |  |
| 4080 | 0.816 | [Download](4080/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4080/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) |  |  |
| 3400 | 0.733 | [Download](3400/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3400/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) |  |  |
| 2720 | 0.821 | [Download](2720/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2720/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) |  |  |
| 2040 | 0.786 | [Download](2040/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2040/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) |  |  |
| 1360 | 0.693 | [Download](1360/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1360/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) |  |  |
| 680 | 0.758 | [Download](680/mamiya_kanoko_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](680/previews/bondage.png) |  |  |  | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) |  |  |
|
sd-concepts-library/ahx-beta-503da17 | sd-concepts-library | 2023-09-15T04:53:43Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-09-15T04:53:40Z | ---
license: mit
---
### ahx-beta-503da17 on Stable Diffusion
This is the `<ahx-beta-503da17>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
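Besides the linked notebooks, the concept can be loaded with diffusers' textual-inversion loader. This is a hedged sketch: the base model choice is an assumption, and the pipeline construction is deferred into a function so nothing heavy runs on import.

```python
# Sketch: loading the <ahx-beta-503da17> concept via diffusers (illustrative only).
def load_concept_pipeline():
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_textual_inversion("sd-concepts-library/ahx-beta-503da17")
    return pipe

# The concept token is then used in prompts as a style.
prompt = "a photo in the style of <ahx-beta-503da17>"
```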
Here is the new concept you will be able to use as a `style`:










|
johaanm/test-planner-alpha-V8.2 | johaanm | 2023-09-15T04:49:26Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T04:49:21Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
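The 4-bit NF4 parameters above can likewise be sketched as a plain mapping. The keys roughly mirror `transformers.BitsAndBytesConfig` arguments; that correspondence, and the subset of fields shown, are assumptions made for illustration.

```python
# Illustrative reconstruction of the 4-bit quantization config listed above.
bnb_config = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",          # NormalFloat4 quantization
    "bnb_4bit_use_double_quant": False,
    "bnb_4bit_compute_dtype": "float16",   # compute in fp16
    "llm_int8_threshold": 6.0,
}
```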
### Framework versions
- PEFT 0.4.0
|
insiderakash/rvc-train-modal | insiderakash | 2023-09-15T04:40:19Z | 0 | 0 | null | [
"music",
"en",
"region:us"
]
| null | 2023-09-15T04:34:07Z | ---
language:
- en
tags:
- music
--- |
mmenendezg/detr-resnet-50_finetuned_cppe5 | mmenendezg | 2023-09-15T04:35:24Z | 212 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| object-detection | 2023-09-14T02:28:11Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Leiyan525/dnd-model-lora-en | Leiyan525 | 2023-09-15T04:12:14Z | 4 | 4 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-09-15T03:09:45Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Leiyan525/dnd-model-lora-en
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the 0xJustin/Dungeons-and-Diffusion dataset. You can find some example images below.




|
CyberHarem/yano_mitsuki_watashinoyuriwaoshigotodesu | CyberHarem | 2023-09-15T03:59:16Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/yano_mitsuki_watashinoyuriwaoshigotodesu",
"license:mit",
"region:us"
]
| text-to-image | 2023-09-15T03:38:23Z | ---
license: mit
datasets:
- CyberHarem/yano_mitsuki_watashinoyuriwaoshigotodesu
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yano_mitsuki_watashinoyuriwaoshigotodesu
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 7920, you need to download `7920/yano_mitsuki_watashinoyuriwaoshigotodesu.pt` as the embedding and `7920/yano_mitsuki_watashinoyuriwaoshigotodesu.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The best step we recommend is 7920**, with a score of 0.966. The trigger words are:
1. `yano_mitsuki_watashinoyuriwaoshigotodesu`
2. `long_hair, purple_hair, brown_eyes, hair_between_eyes`
This model is not recommended for the following groups, to whom we express our regret:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:------------------------------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-----------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-------------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------|
| 9900 | 0.956 | [Download](9900/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9900/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9900/previews/nude.png) | [<NSFW, click to see>](9900/previews/nude2.png) |  |  |
| 9240 | 0.921 | [Download](9240/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](9240/previews/bondage.png) |  |  |  | [<NSFW, click to see>](9240/previews/nude.png) | [<NSFW, click to see>](9240/previews/nude2.png) |  |  |
| 8580 | 0.959 | [Download](8580/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](8580/previews/bondage.png) |  |  |  | [<NSFW, click to see>](8580/previews/nude.png) | [<NSFW, click to see>](8580/previews/nude2.png) |  |  |
| **7920** | **0.966** | [**Download**](7920/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7920/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7920/previews/nude.png) | [<NSFW, click to see>](7920/previews/nude2.png) |  |  |
| 7260 | 0.962 | [Download](7260/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](7260/previews/bondage.png) |  |  |  | [<NSFW, click to see>](7260/previews/nude.png) | [<NSFW, click to see>](7260/previews/nude2.png) |  |  |
| 6600 | 0.965 | [Download](6600/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](6600/previews/bondage.png) |  |  |  | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) |  |  |
| 5940 | 0.959 | [Download](5940/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5940/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5940/previews/nude.png) | [<NSFW, click to see>](5940/previews/nude2.png) |  |  |
| 5280 | 0.955 | [Download](5280/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](5280/previews/bondage.png) |  |  |  | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) |  |  |
| 4620 | 0.947 | [Download](4620/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](4620/previews/bondage.png) |  |  |  | [<NSFW, click to see>](4620/previews/nude.png) | [<NSFW, click to see>](4620/previews/nude2.png) |  |  |
| 3960 | 0.900 | [Download](3960/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3960/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) |  |  |
| 3300 | 0.902 | [Download](3300/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](3300/previews/bondage.png) |  |  |  | [<NSFW, click to see>](3300/previews/nude.png) | [<NSFW, click to see>](3300/previews/nude2.png) |  |  |
| 2640 | 0.938 | [Download](2640/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](2640/previews/bondage.png) |  |  |  | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) |  |  |
| 1980 | 0.934 | [Download](1980/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1980/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1980/previews/nude.png) | [<NSFW, click to see>](1980/previews/nude2.png) |  |  |
| 1320 | 0.914 | [Download](1320/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](1320/previews/bondage.png) |  |  |  | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) |  |  |
| 660 | 0.871 | [Download](660/yano_mitsuki_watashinoyuriwaoshigotodesu.zip) |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  | [<NSFW, click to see>](660/previews/bondage.png) |  |  |  | [<NSFW, click to see>](660/previews/nude.png) | [<NSFW, click to see>](660/previews/nude2.png) |  |  |
|
huyen89/MGTDetectionModel | huyen89 | 2023-09-15T03:57:49Z | 103 | 2 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"arxiv:2303.14822",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-07-04T14:04:24Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is used for detecting machine-generated text.
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
This model was trained on a corpus of both human-written and StableLM-generated answers to questions from the SQuAD1 dataset. The dataset can be found [here](https://drive.google.com/drive/folders/1p4iBeM4r-sUKe8TnS4DcYlxvQagcmola).
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
This model was created by fine-tuning the [Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) model. The training procedure follows the instructions of [He et al.](https://arxiv.org/abs/2303.14822).
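As a sketch of how the detector might be applied at inference time (the `transformers` pipeline call is standard, but the label names and threshold below are assumptions — check the model's `config.json` for the actual label mapping):

```python
def verdict(label: str, score: float, threshold: float = 0.5) -> str:
    """Map a classifier label/score pair to a human-readable verdict.

    Hypothetical mapping: assumes LABEL_1 denotes machine-generated text;
    verify against the model's id2label configuration before relying on it.
    """
    if label == "LABEL_1" and score >= threshold:
        return "machine-generated"
    return "human-written"

if __name__ == "__main__":
    # Downloads the fine-tuned checkpoint; requires network access on first run.
    from transformers import pipeline

    clf = pipeline("text-classification", model="huyen89/MGTDetectionModel")
    result = clf("The Eiffel Tower is located in Paris.")[0]
    print(verdict(result["label"], result["score"]))
```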
|
hattran/gpt2-vn-2-PROMPT_TUNING_CAUSAL_LM | hattran | 2023-09-15T03:49:43Z | 3 | 0 | peft | [
"peft",
"region:us"
]
| null | 2023-09-15T03:49:40Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
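A minimal sketch of loading this prompt-tuning adapter on top of its base model. The repo id comes from this card; the loading pattern is the standard PEFT one and should be verified against the PEFT 0.5.0 documentation:

```python
# Adapter repo id, taken from this model card.
ADAPTER_ID = "hattran/gpt2-vn-2-PROMPT_TUNING_CAUSAL_LM"

def load_model():
    """Load the base causal LM and attach the prompt-tuning adapter.

    Imports are deferred because peft/transformers download weights
    from the Hub on first use.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftConfig, PeftModel

    config = PeftConfig.from_pretrained(ADAPTER_ID)
    base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return model, tokenizer
```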
|