| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
lokaspire/ppo-LunarLander-v2 | lokaspire | 2023-08-18T09:08:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T09:07:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.47 +/- 24.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
# NOTE: the filename is an assumption; check the repo's file list for the exact name.
checkpoint = load_from_hub(repo_id="lokaspire/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
goofyai/Leonardo_Ai_Style_Illustration | goofyai | 2023-08-18T09:07:20Z | 13,043 | 44 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] | text-to-image | 2023-08-18T08:33:06Z | ---
license: apache-2.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: leonardo style,illustration,vector art
widget:
- text: leonardo style llama
---
# Leonardo Ai Style Illustration
## Support me in upgrading my 3060 to a 40xx GPU as my current GPU struggles with SDXL training [Buymeacoffee](https://www.buymeacoffee.com/goofy02)
|  |  |
|:----------------------:|:----------------:|
|  |  |
### Tips:
- Use the activation prompts `leonardo style`, `illustration`, or `vector art`
- A LoRA weight of 0.7-1 works well (see the usage sketch below)
- Highres fix is highly recommended.
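A minimal `diffusers` usage sketch (the scale value and automatic weight-file resolution are assumptions; check the repo's files for the exact LoRA weight name):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model this LoRA was trained against
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights from this repo
pipe.load_lora_weights("goofyai/Leonardo_Ai_Style_Illustration")

# Activation prompt plus a subject; LoRA scale of roughly 0.7-1 as recommended above
image = pipe(
    "leonardo style, illustration, vector art, a llama",
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("leonardo_llama.png")
```
|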
hridayM/Taxi-v3 | hridayM | 2023-08-18T09:06:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T09:06:23Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="hridayM/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
freeman/finetuning-sentiment-model-3000-samples | freeman | 2023-08-18T09:01:36Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T08:44:22Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8618421052631579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3310
- Accuracy: 0.86
- F1: 0.8618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
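A minimal inference sketch for this checkpoint (assuming the repo id from this card is publicly available on the Hub):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sentiment classifier from the Hub
classifier = pipeline("text-classification", model="freeman/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a delightful surprise."))
```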
|
hridayM/q-FrozenLake-v1-4x4-noSlippery | hridayM | 2023-08-18T09:01:03Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T09:01:01Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="hridayM/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Abzu/orca-mini-v3-70b-gptq-q4 | Abzu | 2023-08-18T08:55:11Z | 5 | 1 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:psmathur/orca_mini_v1_dataset",
"dataset:ehartford/dolphin",
"arxiv:2306.02707",
"license:llama2",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-08-18T08:55:11Z | ---
datasets:
- psmathur/orca_mini_v1_dataset
- ehartford/dolphin
inference: false
language:
- en
library_name: transformers
license: llama2
model_creator: Pankaj Mathur
model_link: https://huggingface.co/psmathur/orca_mini_v3_70b
model_name: Orca Mini v3 70B
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
duplicated_from: TheBloke/orca_mini_v3_70B-GPTQ
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Orca Mini v3 70B - GPTQ
- Model creator: [Pankaj Mathur](https://huggingface.co/psmathur)
- Original model: [Orca Mini v3 70B](https://huggingface.co/psmathur/orca_mini_v3_70b)
## Description
This repo contains GPTQ model files for [Pankaj Mathur's Orca Mini v3 70B](https://huggingface.co/psmathur/orca_mini_v3_70b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML)
* [Pankaj Mathur's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v3_70b)
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
## Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All GPTQ files are made with AutoGPTQ.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have issues with models that use Act Order plus Group Size.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 35.33 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 40.66 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 37.99 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.65 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 26.77 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 28.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/orca_mini_v3_70B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/orca_mini_v3_70B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/orca_mini_v3_70B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `orca_mini_v3_70B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) 0.3.1 or later installed:
```
pip3 install auto-gptq
```
If you have problems installing AutoGPTQ, please build from source instead:
```
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
pip3 install .
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/orca_mini_v3_70B-GPTQ"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
# To download from a specific branch, use the revision parameter, as in this example:
# Note that `revision` requires AutoGPTQ 0.3.1 or later!
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''### System:
{system_message}
### User:
{prompt}
### Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini v3 70B
# orca_mini_v3_70b
A Llama2-70b model trained on Orca Style datasets.
### quantized versions
Big thanks to [@TheBloke](https://huggingface.co/TheBloke)
1) https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML
2) https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ
#### license disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Evaluation
We evaluated orca_mini_v3_70b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|**Stderr**|
|:------:|:--------:|:-------:|:--------:|
|*arc_challenge*|acc_norm|0.7098|0.0132|
|*hellaswag*|acc_norm|0.8779|0.0032|
|*mmlu*|acc_norm|0.6904|0.0351|
|*truthfulqa_mc*|mc2|0.6196|0.0151|
|**Total Average**|-|**0.722175**||
**P.S. I am actively seeking sponsorship and partnership opportunities. If you're interested, please connect with me at www.linkedin.com/in/pankajam.**
## Example Usage
Here is the prompt format
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
Tell me about Orcas.
### Assistant:
```
Below shows a code example on how to use this model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_70b")
model = AutoModelForCausalLM.from_pretrained(
"psmathur/orca_mini_v3_70b",
torch_dtype=torch.float16,
load_in_8bit=True,
low_cpu_mem_usage=True,
device_map="auto"
)
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
#generate text steps
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary.
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{orca_mini_v3_70b,
author = {Pankaj Mathur},
title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
|
freeman/my_awesome_model | freeman | 2023-08-18T08:52:24Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T07:17:20Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.92724
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1968
- Accuracy: 0.9272
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2147 | 1.0 | 1563 | 0.1968 | 0.9272 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.3
- Tokenizers 0.13.2
|
aeft/Pixelcopter-PLE-v0 | aeft | 2023-08-18T08:51:58Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T08:51:55Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.20 +/- 46.67
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jerome1519/t5-small-finetuned-coding_instructions_2023_08_18__08_41 | jerome1519 | 2023-08-18T08:43:22Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-18T08:41:17Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-coding_instructions_2023_08_18__08_41
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-coding_instructions_2023_08_18__08_41
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9209
- Rouge1: 13.9516
- Rouge2: 6.1527
- Rougel: 13.1037
- Rougelsum: 13.1244
- Gen Len: 18.3077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | 2.6656 | 8.6104 | 3.1562 | 8.1185 | 8.1422 | 19.0 |
| No log | 2.0 | 10 | 2.5149 | 9.7852 | 3.836 | 9.3185 | 9.3322 | 19.0 |
| No log | 3.0 | 15 | 2.3683 | 13.1134 | 5.2015 | 12.1364 | 12.2677 | 19.0 |
| No log | 4.0 | 20 | 2.2032 | 13.4182 | 5.1369 | 12.5255 | 12.6118 | 19.0 |
| No log | 5.0 | 25 | 2.0986 | 13.6902 | 5.3556 | 12.7848 | 12.898 | 19.0 |
| No log | 6.0 | 30 | 2.0232 | 12.7675 | 4.8786 | 11.9464 | 11.9539 | 18.3846 |
| No log | 7.0 | 35 | 1.9857 | 13.9444 | 6.1527 | 13.0926 | 13.1171 | 18.5385 |
| No log | 8.0 | 40 | 1.9526 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.5385 |
| No log | 9.0 | 45 | 1.9303 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.3077 |
| No log | 10.0 | 50 | 1.9209 | 13.9516 | 6.1527 | 13.1037 | 13.1244 | 18.3077 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
PrinceAyush/Support-chatbot-llama7b | PrinceAyush | 2023-08-18T08:38:46Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-30T20:24:40Z | Mental Health Support Chatbot Project
This project focuses on building a Mental Health Support Chatbot using state-of-the-art technologies, including Llama 3B language model, PEFT, LORA, and 8-bit model quantization. The chatbot aims to provide empathetic and non-judgmental responses to individuals seeking mental health advice, promoting emotional well-being and support. The project comprises Data Preparation, Model Training, and Quantization of the Model.
Data Preparation
The data preparation phase involved collecting and preprocessing a dataset containing mental health conversations from various sources. The dataset consists of 6,365 rows of dialogues related to mental health. To ensure optimal training, the dataset was cleaned and formatted, removing noise, special characters, and irrelevant information.
To enhance the model's performance, data augmentation techniques were employed. Domain-specific language models were utilized to generate additional conversation examples, enabling the chatbot to respond effectively to a wider range of user queries.
Model Training
For the model training, the Llama 3B language model was chosen due to its exceptional performance in natural language understanding. The model was fine-tuned on the prepared mental health dataset using hyperparameters such as batch size, learning rate, and gradient accumulation steps. The training process aimed to optimize the model's ability to generate appropriate and supportive responses based on user prompts.
PEFT and LORA
In this project, PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) techniques were incorporated to make fine-tuning efficient. PEFT reduces the number of trainable parameters, cutting memory use and training time. LoRA, in particular, injects small low-rank adapter matrices into the model's attention layers so that the base weights stay frozen during fine-tuning.
Model Quantization
Due to resource constraints, the model was quantized in 8-bit format using model quantization techniques. Quantization reduces the model size and memory footprint, making it more feasible to deploy on devices with limited resources. The chatbot achieved satisfactory performance with the quantized model, allowing it to run efficiently on systems with lower RAM and GPU capacity.
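A minimal loading sketch under stated assumptions (the base checkpoint id is a placeholder because the card does not name it, and this repo is assumed to hold the trained LoRA adapter):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/llama-base-model"  # placeholder: exact base checkpoint not named in the card
adapter_id = "PrinceAyush/Support-chatbot-llama7b"  # assumed to contain the LoRA adapter weights

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the frozen base model in 8-bit to fit modest GPUs, then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "I've been feeling really anxious lately. What should I do to cope with it?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```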
Model Training Environment
The model was trained on Google Colab, utilizing a virtual machine with 12GB CPU and 12GB T4 GPU RAM. Despite the resource limitations, the model training process yielded desirable results, demonstrating the effectiveness of the applied techniques in creating a functional and resource-efficient chatbot.
Drawbacks of Model Quantization
While 8-bit model quantization provides significant benefits in terms of model size and resource consumption, it may result in a slight decrease in the model's precision and accuracy. The quantized model might not retain the exact same performance as the full-precision model. However, for the purposes of this project and the target application, the trade-off in performance is acceptable given the hardware constraints.
How to Run the Application
To experience the Mental Health Support Chatbot application, follow these steps:
Step 1: Install the required dependencies by executing the following command in your terminal or command prompt:
pip install -r requirements.txt
Step 2: Execute the runApp.py script:
python runApp.py
Please note that the application requires a minimum system specification of 8 GB RAM and 6 GB of GPU to run efficiently.
Test Prompts
Here are some example prompts that were tested on the Mental Health Support Chatbot:
"I've been feeling really anxious lately. What should I do to cope with it?"
"I'm feeling hopeless and don't see any point in living anymore."
"I can't sleep at night, and it's affecting my daily life."
"I'm having trouble concentrating, and I feel so overwhelmed."
"My friend told me they're feeling suicidal. What can I do to help them?"
Conclusion
The Mental Health Support Chatbot project showcases the successful implementation of advanced technologies like PEFT, LORA, and 8-bit model quantization to build an efficient and supportive chatbot. While the model's quantization presents some trade-offs, it allows the chatbot to run effectively on devices with limited resources, making it accessible to a broader audience.
We encourage further exploration and improvement of the chatbot by leveraging larger and more diverse datasets and fine-tuning hyperparameters. Additionally, user feedback and continuous development will help enhance the chatbot's capabilities, providing better mental health support to users.
Finally, we express our gratitude to cofactoryai for their invaluable contribution by providing the frontend interface for the application, ensuring a user-friendly experience for the Mental Health Support Chatbot.
Note: The chatbot is not a substitute for professional mental health advice or therapy. Users with severe mental health concerns should seek help from qualified professionals.
Important Note:
Running runApp.py may take some time, depending on your internet bandwidth, because the LLaMA model and its configuration need to be downloaded. The LLaMA model is about 6GB in size, and the download time will vary based on the speed of your internet connection.
Please be patient during the download process, and ensure that you have a stable and fast internet connection to minimize the waiting time. Once the model is downloaded, subsequent runs of the application will be faster, as the model will be cached locally on your system.
If you encounter any issues during the download or if the process takes longer than expected, please check your internet connection and ensure that you have sufficient storage space on your system to accommodate the model files.
Feel free to reach out for assistance or any questions you may have during the setup and running of the application. Enjoy exploring the capabilities of the LLaMA model for Mental Health Support Chatbot!
### Framework versions
- PEFT 0.4.0
|
JabrilJacobs/rl_course_vizdoom_health_gathering_supreme | JabrilJacobs | 2023-08-18T08:27:40Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T08:27:32Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.35 +/- 5.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JabrilJacobs/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Module path assumed: the standard sf_examples ViZDoom enjoy script
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Module path assumed: the standard sf_examples ViZDoom train script
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
hhs8746/ttest8746 | hhs8746 | 2023-08-18T08:25:13Z | 5 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T08:24:49Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
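For reference, a minimal sketch of how this config maps onto `transformers`' `BitsAndBytesConfig` (the base model id is a placeholder; the card does not name it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Re-create the 4-bit NF4 quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Placeholder id: the card does not state which base model the adapter was trained on
base_model = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model", quantization_config=bnb_config, device_map="auto"
)
```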
### Framework versions
- PEFT 0.4.0
|
hihisu1231/mbti_plus3 | hihisu1231 | 2023-08-18T08:21:30Z | 93 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-18T08:17:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: "polyglot-1.3b-koalpaca-v1.1a-rtx3090_\uB367\uBD99\uC774\uB294\uB2F5\uBCC0\
_ver"
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# polyglot-1.3b-koalpaca-v1.1a-rtx3090_덧붙이는답변_ver
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aeft/Reinforce-Cartpole-v1 | aeft | 2023-08-18T08:11:33Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T08:11:25Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
SlimeCore/Maeve-Paladins | SlimeCore | 2023-08-18T08:10:41Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-08-18T07:42:17Z | ---
license: openrail
---
Paladins is © 2023 Hi-Rez Studios, Inc.; all rights are reserved to them.
If there are any copyright issues, I'll delete the model.
The dataset is taken from the wiki: https://paladins.fandom.com/wiki/Maeve_voice_lines
---
|
rukeshsekar/phibert-finetuned-ner | rukeshsekar | 2023-08-18T07:58:03Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-08-18T07:05:07Z | ---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: phibert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phibert-finetuned-ner
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0269
- Precision: 0.9282
- Recall: 0.9289
- F1: 0.9285
- Accuracy: 0.9952
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0407 | 1.0 | 5783 | 0.0448 | 0.8798 | 0.8846 | 0.8822 | 0.9920 |
| 0.02 | 2.0 | 11566 | 0.0298 | 0.9165 | 0.9144 | 0.9154 | 0.9939 |
| 0.0064 | 3.0 | 17349 | 0.0269 | 0.9282 | 0.9289 | 0.9285 | 0.9952 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
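A minimal inference sketch for this checkpoint (repo id taken from this card; the entity label names depend on the unspecified training dataset):
```python
from transformers import pipeline

# Group sub-word predictions into whole entities with aggregation_strategy="simple"
ner = pipeline("token-classification", model="rukeshsekar/phibert-finetuned-ner", aggregation_strategy="simple")
print(ner("Patient reports chest pain and shortness of breath."))
```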
|
jerome1519/flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51 | jerome1519 | 2023-08-18T07:52:20Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-18T07:51:24Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-finetuned-coding_instructions_2023_08_18__07_51
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | nan | 10.5263 | 8.1081 | 10.5263 | 10.5263 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
sitingGZ/german-bert-clinical-ner | sitingGZ | 2023-08-18T07:52:03Z | 0 | 0 | null | [
"de",
"dataset:bigbio/muchmore",
"license:afl-3.0",
"region:us"
] | null | 2023-08-16T15:00:13Z | ---
license: afl-3.0
datasets:
- bigbio/muchmore
language:
- de
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Finetuned from model:** bert-base-german-cased
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/sitingGZ/bert-sner
- **Paper:** [BERT-SNER](https://aclanthology.org/2023.clinicalnlp-1.31/)
- **Demo:** Coming soon
## Uses
```python
import sys
sys.path.append('modules')

import torch
from transformers import AutoConfig, AutoTokenizer, AutoModelForMaskedLM, EncoderDecoderConfig
from BERT2span_semantic_disam import BERT2span
from helpers import load_config, set_seed
from inference import final_label_results_rescaled

base_name = "bert-base-german-cased"
configs = load_config('configs/step3_gpu_span_semantic_group.yaml')
tokenizer = AutoTokenizer.from_pretrained(base_name)
bertMLM = AutoModelForMaskedLM.from_pretrained(base_name)
bert_sner = BERT2span(configs, bertMLM, tokenizer)

checkpoint_path = "checkpoints/german_bert_ex4cds_500_semantic_term.ckpt"
state_dict = torch.load(checkpoint_path, map_location=torch.device('cpu'))
bert_sner.load_state_dict(state_dict)
bert_sner.eval()

suggested_terms = {'Condition': 'Zeichen oder Symptom',
                   'DiagLab': 'Diagnostisch und Laborverfahren',
                   'LabValues': 'Klinisches Attribut',
                   'HealthState': 'Gesunder Zustand',
                   'Measure': 'Quantitatives Konzept',
                   'Medication': 'Pharmakologische Substanz',
                   'Process': 'Physiologische Funktion',
                   'TimeInfo': 'Zeitliches Konzept'}

words = "Aktuell Infekt mit Nachweis von E Coli und Pseudomonas im TBS- CRP 99mg/dl".split()
words_list = [words]
heatmaps, ner_results = final_label_results_rescaled(words_list, tokenizer, bert_sner, suggested_terms, threshold=0.5)
```
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
|
mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-ggml | mmnga | 2023-08-18T07:48:32Z | 0 | 1 | null | [
"ja",
"license:apache-2.0",
"region:us"
] | null | 2023-08-18T07:03:49Z | ---
license: apache-2.0
language:
- ja
---
# line-corporation/japanese-large-lm-3.6b-instruction-sft
This is a ggml conversion of [japanese-large-lm-3.6b-instruction-sft, released by line-corporation](https://huggingface.co/line-corporation/japanese-large-lm-3.6b-instruction-sft).
## Usage
```
git clone https://github.com/ggerganov/ggml.git
cd ggml
mkdir build && cd build
cmake ..
make -j
./bin/gpt-neox -m 'line-corp-japanese-large-lm-3.6b-instruction-sft-ggml-q4_0.bin' -n 128 -t 8 -p 'ユーザー: 四国の県名を全て列挙してください。\nシステム: '
```
|
Mel-Iza0/Llama2-7B_ZeroShot-20K_classe_bias_port | Mel-Iza0 | 2023-08-18T07:32:11Z | 1 | 0 | peft | [
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-12T14:59:51Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
ybkim95-mit/medalpaca-pmdata-readiness25 | ybkim95-mit | 2023-08-18T07:29:42Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:14:11Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ybkim95-mit/medalpaca-pmdata-stress25 | ybkim95-mit | 2023-08-18T07:29:01Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:13:46Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ybkim95-mit/medalpaca-pmdata-sleep_quality10 | ybkim95-mit | 2023-08-18T07:25:50Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:14:26Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ybkim95-mit/medalpaca-pmdata-sleep_quality25 | ybkim95-mit | 2023-08-18T07:25:05Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:14:32Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ybkim95-mit/medalpaca-globem-depression3 | ybkim95-mit | 2023-08-18T07:23:53Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:15:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
ybkim95-mit/medalpaca-globem-depression10 | ybkim95-mit | 2023-08-18T07:23:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:15:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
iamplus/mpt-30b-v2 | iamplus | 2023-08-18T07:21:45Z | 13 | 10 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:ehartford/dolphin",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T17:44:07Z | ---
datasets:
- ehartford/dolphin
license: apache-2.0
---
**Base Model :** mosaicml/mpt-30b
**Tool :** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry)
**Dataset :** Entire flan3m-GPT3.5 dataset.
**Config yaml with Model Params :** https://huggingface.co/iamplus/mpt-30b-v2/blob/main/mpt-30b_orca.yaml
**Prompt Format :**
```
<system>: [system prompt]
<human>: [question]
<bot>:
``` |
ybkim95-mit/medalpaca-lifesnaps-calories3 | ybkim95-mit | 2023-08-18T07:21:19Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:16:13Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
iamplus/mpt-30b-v3 | iamplus | 2023-08-18T07:20:56Z | 11 | 2 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:ehartford/dolphin",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-10T07:25:53Z | ---
datasets:
- ehartford/dolphin
license: apache-2.0
---
**Base Model :** iamplus/mpt-30b-v2
**Tool :** MosaicML's llm-foundry (https://github.com/mosaicml/llm-foundry)
**Dataset :** Entire flan1m-GPT4 dataset
**Config yaml with Model Params :** https://huggingface.co/iamplus/mpt-30b-v3/blob/main/mpt-30b_orca.yaml
***Description :*** **mosaicml/mpt-30b** -> Finetuning on (Entire flan3m-GPT3.5 dataset for 1 epoch) -> **iamplus/mpt-30b-v2** -> Finetuning on (Entire flan1m-GPT4 dataset for 1 epoch) -> **iamplus/mpt-30b-v3**
**Prompt Format :**
```
<system>: [system prompt]
<human>: [question]
<bot>:
``` |
ybkim95-mit/medalpaca-lifesnaps-calories10 | ybkim95-mit | 2023-08-18T07:20:33Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:16:18Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
Ebo88/llama2-70b-toplink-finetunined-french | Ebo88 | 2023-08-18T07:19:44Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:06:41Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
|
ybkim95-mit/medalpaca-pmdata-stress3 | ybkim95-mit | 2023-08-18T07:19:42Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T07:13:33Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
iamplus/gpt-neoxt-20b-v9 | iamplus | 2023-08-18T07:14:53Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"dataset:iamplus/Conversational_Data",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-04T11:34:32Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
- iamplus/Conversational_Data
---
Instruction Tuned GPT-NeoXT-20B model on the Instruction Tuning dataset listed below (~5.2M samples) using ***Colossal AI***
**Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (GPT-NeoXT-Chat-Base-20B-v0.16 - fine-tuned on feedback data)
**Training Details :**
* Epochs: 2
* Batch Size : 5 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 40
* Block Size : 2020
* Weight Decay : 0
* Learning Rate : 1e-6
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 600
* Machine : 8xA100 80GB
**Training Data Specifics :**
* Labels are the same as the input ids, but with the "human" responses and pad tokens masked so that they don't contribute to the loss calculation.
* Block size is 2020; multiple instructions are packed together in each training example.
* "###" is the EOS Token used in the data. |
iamplus/gpt-neoxt-20b-v6 | iamplus | 2023-08-18T07:13:44Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-13T21:59:22Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
Instruction Tuned GPT-NeoXT-20B model on the Instruction Tuning dataset listed below (~560k samples) using ***Colossal AI***
**Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (GPT-NeoXT-Chat-Base-20B-v0.16 - fine-tuned on feedback data)
**Training Details :**
* Epochs: 5
* Batch Size : 16 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 128
* Max Length : 1024
* Weight Decay : 0
* Learning Rate : 2e-5
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 240
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* stanford_alpaca_it_v2.csv
* ColossalChat.csv
* unified_chip2.csv
* iamai_summarization_v1.csv
* iamai_v1.csv |
iamplus/gpt-neoxt-20b-v4 | iamplus | 2023-08-18T07:13:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-04-05T06:26:40Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
Instruction Tuned GPT-NeoXT-20B model on the Instruction Tuning dataset listed below (~560k samples) using ***Colossal AI***
**Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (GPT-NeoXT-Chat-Base-20B-v0.16 - fine-tuned on feedback data)
**Training Details :**
* Epochs: 2
* Batch Size : 16 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 128
* Max Length : 1024
* Weight Decay : 0
* Learning Rate : 2e-5
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 240
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* stanford_alpaca_it_v2.csv
* ColossalChat.csv
* unified_chip2.csv
* iamai_summarization_v1.csv
* iamai_v1.csv |
iamplus/bloomz-7b1-v3 | iamplus | 2023-08-18T07:12:58Z | 3 | 0 | transformers | [
"transformers",
"bloom",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-20T06:30:03Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
Instruction Tuned Bloomz-7B1 model on the ChatGPT dataset (85k samples) using ***Colossal AI***
**Base Model:** bigscience/bloomz-7b1
**Training Details :**
* Epochs: 5
* Batch Size : 32 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 256
* Max Length : 512
* Weight Decay : 0
* Learning Rate : 2e-5
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 0
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* chat_gpt_v1.csv |
iamplus/gpt-neoxt-20b-v2 | iamplus | 2023-08-18T07:12:33Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-25T21:20:40Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
Instruction Tuned GPT-NeoXT-20B model on the Stanford Alpaca-2 Instruction Tuning dataset (outputs from ChatGPT, 52k samples) using ***Colossal AI***
**Base Model:** togethercomputer/GPT-NeoXT-Chat-Base-20B (not fine-tuned on feedback data)
**Training Details :**
* Epochs: 5
* Batch Size : 16 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 128
* Max Length : 1024
* Weight Decay : 0
* Learning Rate : 2e-5
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 30
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* stanford_alpaca_it_v2.csv |
iamplus/bloomz-7b1-stanford-alpaca-v2 | iamplus | 2023-08-18T07:11:35Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-20T06:46:05Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
Instruction Tuned Bloomz-7B1 model on the Stanford Alpaca Instruction Tuning dataset (52k samples) using ***Colossal AI***
**Base Model:** bigscience/bloomz-7b1
**Training Details :**
* Epochs: 5
* Batch Size : 32 instantaneous per device x 1 gradient accumulation steps x 8 gpus = 256
* Max Length : 512
* Weight Decay : 0
* Learning Rate : 2e-5
* Learning Rate Scheduler Type : Cosine
* Number of warmup steps : 0
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* stanford_alpaca_it.csv |
iamplus/bloomz-7b1-cot-v1 | iamplus | 2023-08-18T07:11:15Z | 4 | 0 | transformers | [
"transformers",
"bloom",
"text-generation",
"dataset:iamplus/CoT",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-12T05:54:59Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/CoT
---
First version of the fine-tuned Bloomz-7B1 model on the CoT dataset from the Flan Data Collection (v2) (~64k samples) using ***HF Deepspeed***
**Base Model:** bigscience/bloomz-7b1
**Training Details :**
* Epochs: 8
* Batch Size : 5 instantaneous per device x 2 gradient accumulation steps x 8 gpus = 80
* Max Length : 1024
* Weight Decay : 0
* Learning Rate : 5e-5
* Learning Rate Scheduler Type : Linear
* Number of warmup steps : 0
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/CoT
Files :
* cot_fsnoopt.csv
* cot_fsopt.csv
* cot_zsnoopt.csv
* cot_zsopt.csv
**Final Review :**
* The model has simply memorized/overfitted the training data and does not perform well on samples outside of it.
* It also looks like the base model weights were changed too much (catastrophic forgetting).
* The Epoch 6 model shows similar problems.
* The Epoch 2 model couldn't find a middle ground: it performs well neither on the training data nor on new data, and simply increasing the number of epochs leads to the memorization described above.
**Conclusion :**
* More high-quality data is needed for the model to really learn the patterns. Increasing only the epochs with too little data just leads to overfitting. |
iamplus/bloomz-7b1-stanford-alpaca-v1 | iamplus | 2023-08-18T07:10:59Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"dataset:iamplus/Instruction_Tuning",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-03-17T15:02:39Z | ---
license: bigscience-openrail-m
datasets:
- iamplus/Instruction_Tuning
---
First version of the Instruction Tuned Bloomz-7B1 model on the Stanford Alpaca Instruction Tuning dataset (52k samples) using ***HF Deepspeed***
**Base Model:** bigscience/bloomz-7b1
**Training Details :**
* Epochs: 4
* Batch Size : 5 instantaneous per device x 3 gradient accumulation steps x 8 gpus = 120
* Max Length : 1024
* Weight Decay : 0
* Learning Rate : 5e-5
* Learning Rate Scheduler Type : Linear
* Number of warmup steps : 40
* Machine : 8xA100 80GB
**Dataset Details :**
Dataset : iamplus/Instruction_Tuning
Files :
* stanford_alpaca_it.csv |
aratshimyanga/q-taxi-v3 | aratshimyanga | 2023-08-18T07:09:56Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T07:09:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="aratshimyanga/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-42 | BenjaminOcampo | 2023-08-18T07:09:37Z | 4 | 0 | transformers | [
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T07:08:45Z | ---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-42
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
F-Haru/WMT_Metrics_da_data | F-Haru | 2023-08-18T07:08:07Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-17T10:56:50Z |
metrics_finetuning_teacher.py contains the code for fine-tuning "paraphrase-mpnet-base-v2".
metrics_finetuning_student.py contains the code for fine-tuning "paraphrase-multilingual-mpnet-base-v2".
distillation.py contains the code for knowledge distillation. |
aratshimyanga/q-FrozenLake-v1-4x4-noSlippery | aratshimyanga | 2023-08-18T07:08:02Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T07:08:00Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="aratshimyanga/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
huytx267/function_retrieval | huytx267 | 2023-08-18T07:05:02Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-08-18T06:39:44Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
cecb/nuixmodel | cecb | 2023-08-18T07:04:35Z | 1 | 0 | peft | [
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2023-08-17T21:05:23Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
zhangbo2008/best_llm_train06P55P27p2023 | zhangbo2008 | 2023-08-18T06:55:28Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T06:55:27Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
zhangbo2008/best_llm_train6p54p37p2023 | zhangbo2008 | 2023-08-18T06:54:38Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T06:54:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
kimdwan/klue-roberta-finetuned-korquad-LOGAN | kimdwan | 2023-08-18T06:53:40Z | 101 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"license:other",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-08-18T06:28:18Z | ---
license: other
---
This model was additionally fine-tuned on the Hugging Face KorQuAD dataset. Example code is shown below. Please give it lots of love.
```
from transformers import RobertaForQuestionAnswering,AutoTokenizer
import torch
path = 'kimdwan/klue-roberta-finetuned-korquad-LOGAN'
model = RobertaForQuestionAnswering.from_pretrained(path)
tokenizer = AutoTokenizer.from_pretrained(path)
# Example passage (Korean text about the Republic of Korea).
text = """
대한민국(大韓民國, 영어: Republic of Korea, ROK), 약칭 한국(韓國, 영어: Korea), 남한(南韓, 영어: South Korea), 남조선(南朝鮮)은 동아시아의 한반도 남부에 위치한 국가이다.
현정체제는 대한민국 제6공화국이다.
대한민국의 국기는 대한민국 국기법에 따라 태극기[5]이며, 국가는 관습상 애국가, 국화는 관습상 무궁화이다.
공용어는 한국어와 한국 수어이다. 수도는 서울이다. 인구는 2023년을 기준으로 약 5,174만 명으로, 전체 인구 중 절반이(약 2,624만 명) 수도권에 살고 있다.[6]
대한민국은 1948년 5월 10일 총선거를 통해 제헌국회를 구성하였고, 1948년 8월 15일 대한민국 정부를 수립하였다.
1948년 제헌 국회에서 대한민국의 국호를 계승하여 헌법에 명시하였고, 다시 1950년 1월 16일 국무원 고시 제7호 ‘국호 및 일부 지방명과 지도색 사용에 관한 건’에 의해 확정하였다.
"""
# Write your question here.
question = "대한민국 정부 수립 시기는?"
# Inference starts here.
confix = question + "[SEP]" + text
token = tokenizer(confix,return_tensors="pt")
model.eval()
output = model(**token)
_,start = torch.max(output['start_logits'],dim=-1)
_,end = torch.max(output['end_logits'],dim=-1)
last = token["input_ids"][0][start.item():end.item()+1]
tokenizer.decode(last)
``` |
Kadan2023/PlantsMix | Kadan2023 | 2023-08-18T06:51:16Z | 0 | 2 | null | [
"stable-diffusion",
"text-to-image",
"ja",
"en",
"license:other",
"region:us"
] | text-to-image | 2023-08-17T11:39:05Z | ---
license: other
language:
- ja
- en
tags:
- stable-diffusion
- text-to-image
---
<div class="flex justify-center">
<div class="container p-0 w-100">
<img class="mt-0 object-cover rounded-t-lg w-100"
style="height: 320px;"
src="https://huggingface.co/Kadan2023/PlantsMix/resolve/main/images/header.png"
width="100%"/>
<div class="flex px-4">
<div class="flex-auto">
<h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(77 124 147/var(--tw-text-opacity));">
PlantsMix
<a href="https://huggingface.co/Kadan2023/PlantsMix/resolve/main/README.md" class="ml-2 inline-block">
<svg xmlns="http://www.w3.org/2000/svg" class="h-5 w-5" fill="none" viewBox="0 0 24 24" strokeWidth={1.5} stroke="currentColor" className="w-6 h-6">
<path strokeLinecap="round" strokeLinejoin="round" d="M3.75 3.75v4.5m0-4.5h4.5m-4.5 0L9 9M3.75 20.25v-4.5m0 4.5h4.5m-4.5 0L9 15M20.25 3.75h-4.5m4.5 0v4.5m0-4.5L15 9m5.25 11.25h-4.5m4.5 0v-4.5m0 4.5L15 15" />
</svg>
</a>
</h1>
<p class="mb-4 text-base text-neutral-600 dark:text-neutral-200">
注意:SD1.5、レシピ非公開プライベートモデルです。使用方法、リスクについてはご自身でお調べになり、ご理解の上使用ください。 <br>
NOTE: This is a private SD1.5 model with an undisclosed recipe. Please research and understand the usage and risks yourself before deciding to use it.
</p>
</div>
</div>
</div>
</div>
<hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));">
<h3 id="PlantsMix" class="mt-0 text-2xl">
<code>PlantsMix</code> <small>(<code>@75a24d317f</code>)</small>
</h3>
<div>
<h4>ライセンス / License</h4>
<div class="px-2">
<table class="table-fixed border mt-0 text-xs">
<tbody>
<tr>
<td class="px-4 text-base" colspan="2">
<a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">
修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license
</a>
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルのクレジットを入れずに使用する<br>
Use the model without crediting the creator
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルで生成した画像を商用利用する<br>
Sell images they generate
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルを商用の画像生成サービスで利用する</br>
Run on services that generate images for money
</td>
</tr>
<tr>
<td class="align-middle px-2 w-8">
<span class="text-green-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" />
</svg>
</span>
</td>
<td>
このモデルを使用したマージモデルを共有する<br>
Share merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデル、またはこのモデルをマージしたモデルを販売する</br>
Sell this model or merges using this model
</td>
</tr>
<tr class="bg-danger-100">
<td class="align-middle px-2 w-8">
<span class="text-red-500">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6">
<path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" />
</svg>
</span>
</td>
<td>
このモデルをマージしたモデルに異なる権限を設定する</br>
Have different permissions when sharing merges
</td>
</tr>
</tbody>
</table>
</div>
<h4>推奨設定 / Recommended Settings</h4>
<div class="px-2">
<table class="table-auto border mt-0 text-sm">
<tbody>
<tr>
<td class="pl-2" style="width: 12rem;">
VAE
</td>
<td>
<a href="https://civitai.com/models/22354/clearvae">ClearVAE</a>
</td>
</tr>
<tr>
<td class="pl-2">
Negative Embedding
</td>
<td>
<a href="https://huggingface.co/gsdf/Counterfeit-V3.0/tree/main/embedding">EasyNegativeV2</a><br>
<a href="https://huggingface.co/yesyeahvh/bad-hands-5/tree/main">badhandv4</a> ※badhandv4はなくても良い、ある方が崩壊するかも
</td>
</tr>
</tbody>
</table>
</div>
<h4>例 / Examples</h4>
<div class="container mx-auto px-2">
<div class="flex flex-wrap min-w-min items-baseline">
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="flex-1">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Kadan2023/PlantsMix/resolve/main/images/1.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Ultra detailed, best quality, official art, masterpiece, illustration,Detailed face and long eyelashes, ray tracing, depth of field. BREAK
13 year old (boy) and a girl. They are standing next to each other. Victorian style royalcore twin-corded fashion. They are very similar but with different elements. BREAK
The season is summer, Cute and dreamy backgrounds are the beauty they deserve. art by Representing Artists.
Negative prompt: nsfw, blush, EasyNegativeV2, signature, watermark, username, blurry, artist name.
Steps: 25
Sampler: DDIM
CFG scale: 7
Seed: 3126641670
Clip Skip: 1
</pre>
</div>
</div>
<div class="p-1 flex-1" style="width: 50%; min-width: 320px;">
<div class="w-full">
<img
alt="gallery"
class="block h-full w-full rounded-t-lg object-contain object-center"
src="https://huggingface.co/Kadan2023/PlantsMix/resolve/main/images/2.png"
loading="lazy"
/>
</div>
<div class="w-full">
<pre class="w-full" style="white-space: pre-line;">
Ultra detailed, best quality, official art, masterpiece, illustration, Detailed face and long eyelashes, solo, cowboy shot, ray tracing, depth of field. BREAK.girl in magical fantasy Night. cute magicalcore fashion. The season is summer. Stars and light effects, Cute and dreamy backgrounds are the beauty deserve. art by Representing Artists.
Negative prompt: EasyNegativeV2, badhandv4, nsfw, blush, signature, watermark, username, blurry, artist name.
Steps: 25
Sampler: DDIM
CFG scale: 7
Seed: 170819351
Clip Skip: 1
</pre>
</div>
</div>
</div>
</div>
</div>
|
camus-ng/lora-trained-xl-cory-10 | camus-ng | 2023-08-18T06:34:51Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-16T10:22:22Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/lora-trained-xl-cory-10
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
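A hedged inference sketch (not from the original card) that combines the base model, the fp16-fix VAE mentioned above, and these LoRA weights; generation settings are assumptions.

```python
# Hedged SDXL + LoRA inference sketch; generation settings are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline, AutoencoderKL

# VAE used for training, per the note above.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA adaptation weights from this repository (text encoder LoRA included).
pipe.load_lora_weights("camus-ng/lora-trained-xl-cory-10")

# Instance prompt from the card metadata.
image = pipe("a photo of <ntvc> man", num_inference_steps=30).images[0]
image.save("ntvc_man.png")
```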
|
xianbin/a2c-PandaReachDense-v2 | xianbin | 2023-08-18T06:31:12Z | 10 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-28T03:03:48Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.58 +/- 0.78
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
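As a hedged starting point for the TODO above, the checkpoint can be pulled with `huggingface_sb3`; the filename inside the repo is an assumption.

```python
# Hedged loading sketch; the checkpoint filename is an assumption.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="xianbin/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumption
)
model = A2C.load(checkpoint, print_system_info=True)
# Rolling the policy out requires the panda_gym package, which registers PandaReachDense-v2.
```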
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
PM-AI/sts_paraphrase_xlm-roberta-base_de-en | PM-AI | 2023-08-18T06:30:56Z | 9 | 4 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"semantic textual similarity",
"sts",
"semantic search",
"sentence similarity",
"paraphrasing",
"sentence-transformer",
"sentence-similarity",
"de",
"en",
"arxiv:2004.09813",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-01-09T12:35:09Z | ---
language:
- de
- en
pipeline_tag: sentence-similarity
tags:
- semantic textual similarity
- sts
- semantic search
- sentence similarity
- paraphrasing
- sentence-transformer
- feature-extraction
- transformers
license: mit
---
# Model card for PM-AI/sts_paraphrase_xlm-roberta-base_de-en
## Model summary
Transformer model for **Semantic Textual Similarity (STS)** for _German_ and _English_ sentences/texts.
The embedding output can be used for **semantic search**, **paraphrasing** and **retrieval** with _cosine similarity_.
The model is applicable to _English-German mixed_ sentences/texts as well as to _English only_ and _German only_.
The model can be easily used with [Sentence Transformer](https://github.com/UKPLab/sentence-transformers) library.
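For illustration, a minimal usage sketch with the Sentence Transformers library; the example sentences are ours, not taken from the training data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("PM-AI/sts_paraphrase_xlm-roberta-base_de-en")

# A cross-lingual German/English paraphrase pair plus an unrelated sentence.
sentences = [
    "Der Hund spielt im Garten.",
    "The dog is playing in the garden.",
    "The stock market closed lower today.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity drives semantic search, paraphrase detection and retrieval.
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # the first two sentences should score much higher than the third
```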
## Training
This model is based on a training approach from 2020 by Philip May, who published the [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model.
We updated this approach with a new base model for fine-tuning and some extensions to the training data.
These changes are discussed in the next sections.
### Training Data
The model is based on training with samples from [STSb](https://huggingface.co/datasets/stsb_multi_mt), [SICK](https://huggingface.co/datasets/mteb/sickr-sts) and [Priya22 semantic textual relatedness](https://github.com/Priya22/semantic-textual-relatedness) datasets.
They contain about 76,000 sentence pairs in total.
These sentence pairs are based on _German-German_, _English-English_ and _German-English mixed_.
The training objective is to optimize a _cosine similarity loss_ based on human-annotated sentence similarity scores.
In terms of content, the samples are based on rather simple sentences.
When the T-Systems model was published, only the STSb dataset was used for STS training.
It is therefore included in our model as well, but the training data is expanded to include SICK and Priya22 semantic textual relatedness:
- SICK was partly used in STSb, but our custom translation using [DeepL](https://www.deepl.com/) leads to slightly different phrases. This approach allows more examples to be included in the training.
- The Priya22 semantic textual relatedness dataset published in 2022 was also translated into German via DeepL and added to the training data. Since it does not have a train-test-split, it was created independently at a ratio of 80:20.
All training and test data (STSb, SICK, Priya22) were checked for duplicates within and across datasets, and duplicates were removed.
Because the test data takes priority, duplicate entries between test and train are removed from the train split only.
The final datasets used can be viewed here: [datasets_sts_paraphrase_xlm-roberta-base_de-en](https://gitlab.com/sense.ai.tion-public/datasets_sts_paraphrase_xlm-roberta-base_de-en)
### Training
Before fine-tuning for STS, we made the English paraphrasing model [paraphrase-distilroberta-base-v1](https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v1) usable for German by applying **[Knowledge Distillation](https://arxiv.org/abs/2004.09813)** (_Teacher-Student_ approach).
The T-Systems model used version 1, which is based on 7 different datasets and contains around 24.6 million samples.
We are using version 2 with 12 datasets and about 83.3 million examples.
Details for this process here: [PM-AI/paraphrase-distilroberta-base-v2_de-en](https://huggingface.co/PM-AI/paraphrase-distilroberta-base-v2_de-en)
For fine-tuning we are using SBERT's [training_stsbenchmark_continue_training.py](https://github.com/UKPLab/sentence-transformers/blob/b86eec31cf0a102ad786ba1ff31bfeb4998d3ca5/examples/training/sts/training_stsbenchmark_continue_training.py) training script.
One thing has been changed in this training script: when a sentence pair consists of identical utterances, the score is set to 5.0 (the maximum).
It makes no sense for identical sentences to have a score of 4.8 or 4.9.
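For illustration only, the described tweak could look roughly like this; the helper and its field handling are our sketch, not the actual script.

```python
# Sketch of the described change: identical sentence pairs get the maximum score.
from sentence_transformers import InputExample

def make_example(sent1: str, sent2: str, score: float) -> InputExample:
    if sent1.strip() == sent2.strip():
        score = 5.0  # identical utterances are forced to the maximum similarity
    return InputExample(texts=[sent1, sent2], label=score / 5.0)  # normalize to [0, 1]
```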
#### Parameterization of training
- **Script:** [training_stsbenchmark_continue_training.py](https://github.com/UKPLab/sentence-transformers/blob/b86eec31cf0a102ad786ba1ff31bfeb4998d3ca5/examples/training/sts/training_stsbenchmark_continue_training.py)
- **Datasets:** [datasets_sts_paraphrase_xlm-roberta-base_de-en/all_cross_train_unique.csv](https://gitlab.com/sense.ai.tion-public/datasets_sts_paraphrase_xlm-roberta-base_de-en)
- **GPU:** NVIDIA A40 (Driver Version: 515.48.07; CUDA Version: 11.7)
- **Batch Size:** 32
- **Base Model:** [PM-AI/paraphrase-distilroberta-base-v2_de-en](PM-AI/paraphrase-distilroberta-base-v2_de-en)
- **Loss Function:** Cosine Similarity
- **Learning Rate:** 2e-5
- **Epochs:** 3
- **Evaluation Samples:** 500
- **Evaluation Steps:** 1000
- **Warmup Steps:** 10%
### Evaluation <a name="evaluation"></a>
Now the performance is measured cross-lingually as well as for German and English only.
In addition, the test samples used are evaluated individually for each data set (STSb, SICK, Priya22), as well as in a large combined test data set (all).
This subdivision per data set allows for a fair overall assessment, since external models are not built on the same data basis as the model presented here.
The data is not evenly distributed in either training or testing!
**❗Some models are only usable for one language (because they are monolingual). They will almost not perform at all in the other two tables. Still, they are good models in certain applications❗**
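Before the tables, a sketch of how such Spearman scores can be computed; this is our illustration, and the file path and column names are assumptions.

```python
# Spearman evaluation sketch with SBERT's EmbeddingSimilarityEvaluator.
import csv
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

sentences1, sentences2, scores = [], [], []
with open("all_cross_test_unique.csv", newline="", encoding="utf-8") as f:  # assumed file/columns
    for row in csv.DictReader(f):
        sentences1.append(row["sentence1"])
        sentences2.append(row["sentence2"])
        scores.append(float(row["score"]) / 5.0)  # normalize to [0, 1]

model = SentenceTransformer("PM-AI/sts_paraphrase_xlm-roberta-base_de-en")
evaluator = EmbeddingSimilarityEvaluator(
    sentences1, sentences2, scores, main_similarity=SimilarityFunction.COSINE
)
print(evaluator(model))  # Spearman correlation based on cosine similarity
```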
The first table shows the evaluation results for **cross-lingual (German-English-Mixed)** based on _Spearman_:
**model**|**STSb**|**SICK**|**Priya22**|**all**|
:-----:|:-----:|:-----:|:-----:|:-----:
[PM-AI/sts_paraphrase_xlm-roberta-base_de-en (ours)](https://huggingface.co/PM-AI/sts_paraphrase_xlm-roberta-base_de-en) | 0.8672 <br /> 🏆 | 0.8639 <br /> 🏆 | 0.8354 <br /> 🏆 | 0.8711 <br /> 🏆
[T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) | 0.8525 | 0.7642 | 0.7998 | 0.8216
[PM-AI/paraphrase-distilroberta-base-v2_de-en (ours, no fine-tuning)](PM-AI/paraphrase-distilroberta-base-v2_de-en) | 0.8225 | 0.7579 | 0.8255 | 0.8109
[sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8310 | 0.7529 | 0.8184 | 0.8102
[sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual) | 0.8194 | 0.7703 | 0.7566 | 0.7998
[sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) | 0.7985 | 0.7217 | 0.7975 | 0.7838
[sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1](https://huggingface.co/sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1) | 0.7985 | 0.7217 | 0.7975 | 0.7838
[sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 0.7823 | 0.7090 | 0.7830 | 0.7834
[sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) | 0.7449 | 0.6941 | 0.7607 | 0.7534
[sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) | 0.7517 | 0.6950 | 0.7619 | 0.7496
[sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking](https://huggingface.co/sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking) | 0.7211 | 0.6650 | 0.7382 | 0.7200
[Sahajtomar/German-semantic](https://huggingface.co/Sahajtomar/German-semantic) | 0.7170 | 0.5871 | 0.7204 | 0.6802
[symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli) | 0.6488 | 0.5489 | 0.6688 | 0.6303
[sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large) | 0.6849 | 0.6063 | 0.7360 | 0.5843
[sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.6013 | 0.5213 | 0.6671 | 0.5068
[sentence-transformers/gtr-t5-large](https://huggingface.co/sentence-transformers/gtr-t5-large) | 0.5881 | 0.5168 | 0.6674 | 0.4984
[deepset/gbert-large-sts](https://huggingface.co/deepset/gbert-large-sts) | 0.3842 | 0.3537 | 0.4105 | 0.4362
[sentence-transformers/gtr-t5-base](https://huggingface.co/sentence-transformers/gtr-t5-base) | 0.5204 | 0.4346 | 0.6008 | 0.4276
[textattack/bert-base-uncased-STS-B](https://huggingface.co/textattack/bert-base-uncased-STS-B) | 0.0669 | 0.1135 | 0.0105 | 0.1514
[symanto/xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/xlm-roberta-base-snli-mnli-anli-xnli) | 0.1694 | 0.0440 | 0.0521 | 0.1156
The second table shows the evaluation results for **German only** based on _Spearman_:
**model**|**STSb**|**SICK**|**Priya22**|**all**|
:-----:|:-----:|:-----:|:-----:|:-----:
[PM-AI/sts_paraphrase_xlm-roberta-base_de-en (ours)](https://huggingface.co/PM-AI/sts_paraphrase_xlm-roberta-base_de-en) | 0.8658 <br /> 🏆 | 0.8775 <br /> 🏆 | 0.8432 <br /> 🏆 | 0.8747 <br /> 🏆
[T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) | 0.8547 | 0.8047 | 0.8068 | 0.8327
[Sahajtomar/German-semantic](https://huggingface.co/Sahajtomar/German-semantic) | 0.8485 | 0.7915 | 0.8139 | 0.8280
[sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8360 | 0.7941 | 0.8237 | 0.8178
[PM-AI/paraphrase-distilroberta-base-v2_de-en (ours, no fine-tuning)](PM-AI/paraphrase-distilroberta-base-v2_de-en) | 0.8297 | 0.7930 | 0.8341 | 0.8170
[sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual) | 0.8190 | 0.8027 | 0.7674 | 0.8072
[sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) | 0.8079 | 0.7844 | 0.8126 | 0.8034
[sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1](https://huggingface.co/sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1) | 0.8079 | 0.7844 | 0.8126 | 0.8034
[sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 0.7891 | 0.7830 | 0.8010 | 0.7981
[sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) | 0.7705 | 0.7612 | 0.7899 | 0.7780
[sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large) | 0.7771 | 0.7724 | 0.7829 | 0.7727
[sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.7361 | 0.7613 | 0.7643 | 0.7602
[sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) | 0.7467 | 0.7494 | 0.7684 | 0.7584
[sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking](https://huggingface.co/sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking) | 0.7419 | 0.7420 | 0.7692 | 0.7566
[sentence-transformers/gtr-t5-large](https://huggingface.co/sentence-transformers/gtr-t5-large) | 0.7252 | 0.7201 | 0.7613 | 0.7447
[sentence-transformers/gtr-t5-base](https://huggingface.co/sentence-transformers/gtr-t5-base) | 0.7058 | 0.6943 | 0.7462 | 0.7271
[symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli) | 0.7284 | 0.7136 | 0.7109 | 0.6997
[deepset/gbert-large-sts](https://huggingface.co/deepset/gbert-large-sts) | 0.6576 | 0.7141 | 0.6769 | 0.6959
[textattack/bert-base-uncased-STS-B](https://huggingface.co/textattack/bert-base-uncased-STS-B) | 0.4427 | 0.6023 | 0.4380 | 0.5380
[symanto/xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/xlm-roberta-base-snli-mnli-anli-xnli) | 0.4154 | 0.5048 | 0.3478 | 0.4540
And last but not least our third table which shows the evaluation results for **English only** based on _Spearman_:
**model**|**STSb**|**SICK**|**Priya22**|**all**|
:-----:|:-----:|:-----:|:-----:|:-----:
[PM-AI/sts_paraphrase_xlm-roberta-base_de-en (ours)](https://huggingface.co/PM-AI/sts_paraphrase_xlm-roberta-base_de-en) | 0.8768 <br /> 🏆 | 0.8705 <br /> 🏆 | 0.8402 | 0.8748 <br /> 🏆
[sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 0.8682 | 0.8065 | 0.8430 | 0.8378
[PM-AI/paraphrase-distilroberta-base-v2_de-en (ours, no fine-tuning)](PM-AI/paraphrase-distilroberta-base-v2_de-en) | 0.8597 | 0.8105 | 0.8399 | 0.8363
[T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) | 0.8660 | 0.7897 | 0.8097 | 0.8308
[sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) | 0.8441 | 0.8059 | 0.8175 | 0.8300
[sentence-transformers/sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.8551 | 0.8063 | 0.8434 | 0.8235
[sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large) | 0.8536 | 0.8097 | 0.8475 <br /> 🏆 | 0.8191
[sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual) | 0.8503 | 0.8009 | 0.7675 | 0.8162
[sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) | 0.8350 | 0.7645 | 0.8211 | 0.8050
[sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1](https://huggingface.co/sentence-transformers/xlm-r-distilroberta-base-paraphrase-v1) | 0.8350 | 0.7645 | 0.8211 | 0.8050
[sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) | 0.8075 | 0.7534 | 0.7908 | 0.7828
[sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) | 0.8061 | 0.7421 | 0.7923 | 0.7784
[Sahajtomar/German-semantic](https://huggingface.co/Sahajtomar/German-semantic) | 0.8061 | 0.7098 | 0.7709 | 0.7712
[sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking](https://huggingface.co/sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking) | 0.7866 | 0.7477 | 0.7700 | 0.7691
[sentence-transformers/gtr-t5-large](https://huggingface.co/sentence-transformers/gtr-t5-large) | 0.7763 | 0.7258 | 0.8124 | 0.7675
[sentence-transformers/gtr-t5-base](https://huggingface.co/sentence-transformers/gtr-t5-base) | 0.7961 | 0.7129 | 0.8147 | 0.7669
[symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli) | 0.7799 | 0.7415 | 0.7335 | 0.7376
[deepset/gbert-large-sts](https://huggingface.co/deepset/gbert-large-sts) | 0.5703 | 0.6011 | 0.5673 | 0.6060
[textattack/bert-base-uncased-STS-B](https://huggingface.co/textattack/bert-base-uncased-STS-B) | 0.4978 | 0.6099 | 0.5505 | 0.5754
[symanto/xlm-roberta-base-snli-mnli-anli-xnli](https://huggingface.co/symanto/xlm-roberta-base-snli-mnli-anli-xnli) | 0.3830 | 0.5180 | 0.3056 | 0.4414
**❗It is crucial to understand that:**
- Only our model has seen training data from STSb, SICK and Priya22 combined, which is one reason for the better results. The model has simply been trained to be more sensitive to these types of samples.
- The datasets are not proportionally aligned in terms of their number of examples. For example, Priya22 is significantly underrepresented.
- The compared models are of different sizes, which affects resource consumption (CPU, RAM) and inference speed (benchmark). So-called "large" models usually perform better, but also cost more (resources, monetary value) than e.g. "base" models.
- Multilingual models are usually made multilingual by Knowledge Distillation, starting from a monolingual state. Therefore, they usually perform somewhat better in the original language.
### Acknowledgment
This work is a collaboration between [Technical University of Applied Sciences Wildau (TH Wildau)](https://en.th-wildau.de/) and [sense.ai.tion GmbH](https://senseaition.com/).
You can contact us via:
* [Philipp Müller (M.Eng.)](https://www.linkedin.com/in/herrphilipps); Author
* [Prof. Dr. Janett Mohnke](mailto:[email protected]); TH Wildau
* [Dr. Matthias Boldt, Jörg Oehmichen](mailto:[email protected]); sense.AI.tion GmbH
This work was funded by the European Regional Development Fund (EFRE) and the State of Brandenburg. Project: "ProFIT: Natürlichsprachliche Dialogassistenten in der Pflege" (natural-language dialogue assistants in nursing care).
<div style="display:flex">
<div style="padding-left:20px;">
<a href="https://efre.brandenburg.de/efre/de/"><img src="https://huggingface.co/datasets/PM-AI/germandpr-beir/resolve/main/res/EFRE-Logo_rechts_oweb_en_rgb.jpeg" alt="Logo of European Regional Development Fund (EFRE)" width="200"/></a>
</div>
<div style="padding-left:20px;">
<a href="https://www.senseaition.com"><img src="https://senseaition.com/wp-content/uploads/thegem-logos/logo_c847aaa8f42141c4055d4a8665eb208d_3x.png" alt="Logo of senseaition GmbH" width="200"/></a>
</div>
<div style="padding-left:20px;">
<a href="https://www.th-wildau.de"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/TH_Wildau_Logo.png/640px-TH_Wildau_Logo.png" alt="Logo of TH Wildau" width="180"/></a>
</div>
</div> |
Joe-Joe/QuandaleDingle | Joe-Joe | 2023-08-18T06:10:20Z | 0 | 0 | null | [
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-08-18T06:01:35Z | ---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vishwanath2003/mymlappp | vishwanath2003 | 2023-08-18T05:48:17Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-18T05:47:11Z | ---
license: bigscience-openrail-m
---
|
LarryAIDraw/noa | LarryAIDraw | 2023-08-18T05:28:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-18T20:08:31Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/106525/ushio-noa-blue-archive-or-goofy-ai |
warshakhan/donut-base-ISynHMP | warshakhan | 2023-08-18T05:26:59Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-08-16T10:36:36Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-ISynHMP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-ISynHMP
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
Fine-tuned Donut model for the document parsing task on the ISynGen_HMP dataset.
## Intended uses & limitations
More information needed
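In the absence of an official example, the following is a minimal document-parsing sketch. It assumes the processor files were pushed with the model and that the checkpoint expects a Donut-style task prompt; the task-prompt token used below is a placeholder guess, not a documented value.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "warshakhan/donut-base-ISynHMP"
processor = DonutProcessor.from_pretrained(repo)  # assumption: processor files were saved with the model
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("prescription.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut is conditioned on a task prompt; "<s_isynhmp>" is a placeholder guess,
# not a documented token for this checkpoint.
task_prompt = "<s_isynhmp>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)
# Convert the generated token sequence into a JSON-like dict of parsed fields
print(processor.token2json(processor.batch_decode(outputs)[0]))
```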
## Training and evaluation data
ISynGen_HMP, a private handwritten medical prescription dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JabrilJacobs/ppo-LunarLander-v2 | JabrilJacobs | 2023-08-18T05:26:07Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2022-12-11T06:58:46Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -203.44 +/- 98.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'JabrilJacobs/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
rohn132/dqn-SpaceInvadersNoFrameskip-v4 | rohn132 | 2023-08-18T05:15:09Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T05:14:33Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 626.00 +/- 178.90
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rohn132 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rohn132 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rohn132
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
heegyu/polyglot-ko-5.8b-chat | heegyu | 2023-08-18T05:08:42Z | 2,282 | 3 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-18T00:24:41Z | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/polyglot-ko-5.8b-chat
- A model trained from [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on several Korean instruction datasets
## Datasets used

| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a small fraction) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |

- From the KULLM v2 dataset, only the GPT4ALL and Dolly subsets were extracted and used.
- For more Korean instruction datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.

## Generation prompt

- Generate until the EOS token (<|endoftext|>) is produced.
- Keeping the system prompt at the very top seems to produce good answers more often.
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
서울 관광지 추천해줘요
### 챗봇:
```
Generated answer:
```
서울 관광지를 추천하겠습니다.
경복궁 (경복궁 또는 경복궁)
경복궁은 한국 전통 궁궐로서 조선 시대의 역사와 건축 철학을 체험할 수 있는 공간입니다. 궁을 방문하면 조선시대 생활과 통치방식에 대해 배울 수 있으며, 동시에 전통적인 궁궐 문화를 체험할 수 있습니다.
창덕궁 (창덕궁)
창덕궁은 조선시대에 왕실의 정원으로 유명한 궁입니다. 한국의 르네상스 시대를 보여주는 아름다움을 경험할 수 있으며, 전통적인 건축양식을 경험할 수 있는 기회가 주어집니다.
덕수궁 (덕수궁)
덕수궁은 1896년에 설립된 왕이 사는 궁궐입니다. 당시의 아름다움과 건축 철학을 동시에 느낄 수 있는 궁궐입니다. 한국의 궁궐 문화, 역사 및 자연 환경에 대한 학습을 원하는 사람에게 추천됩니다.
종묘 (종묘)
종묘는 조선시대의 왕실 정원으로서 궁궐과 함께 왕의 거처 및 제사를 지내는 곳입니다. 전통적인 의례, 역사 및 건축 철학을 느끼고 체험할 수 있는 곳입니다.
한옥마을 (한옥마을)
한옥마을은 서울 중심부에 위치한 유명한 관광지로, 한국 전통 문화 체험을 원하는 사람과 외국인에게도 인기가 많습니다. 한옥 건물과 전통 정원을 경험할 수 있으며, 도시 속의 전통 문화를 경험할 수 있는 기회를 제공합니다.
인사동 (인사동)
인사동은 한국의 전통적인 문화와 예술, 음식 및 쇼핑을 체험할 수 있는 곳입니다. 한국의 전통문화를 체험하고, 전통 문화의 아름다움을 느낄 수 있는 장소입니다.
위의 관광지를 방문함으로써, 우리는 한국의 전통적인 문화와 역사를 경험하고 배울 수 있습니다. 또한, 아름다움과 건축 철학에 대한 이해도를 높일 수 있으며, 우리 지역의 문화, 전통, 역사를 더욱 이해하고 사랑하게 될 수 있습니다.
```
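A minimal way to run the prompt format above with `transformers` (a sketch; the generation settings here are assumptions, not tuned recommendations):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/polyglot-ko-5.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map="auto" requires accelerate

# Prompt follows the format shown above: system line, "### 사용자:" turn, then "### 챗봇:"
prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n\n"
    "### 사용자:\n서울 관광지 추천해줘요\n\n"
    "### 챗봇:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, eos_token_id=tokenizer.eos_token_id)
# Print only the newly generated chatbot answer
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```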
Multi-turn conversation example
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
저녁 메뉴 추천해주세요
### 챗봇:
한식을 선호하세요?
### 사용자:
네네
### 챗봇:
한식 중에서는 찌게류를 추천드리겠어요. 된장찌개, 김치찌개, 부대찌개와 같은 전통적인 찌게 메뉴를 선택해보세요. 더 좋은 선택을 위해 다양한 종류의 찌게를 시도해보세요. 만약에 다른 음식을 먹고 싶다면 그것도 좋아요!
``` |
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-3 | BenjaminOcampo | 2023-08-18T04:57:10Z | 3 | 0 | transformers | [
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T04:56:22Z | ---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-3
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
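Until an official snippet is added here, the following minimal sketch assumes the checkpoint loads as a standard BERT sequence-classification model via `transformers`; the label mapping is not documented in this card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # what each class index means is not specified in this card
```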
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zijuncheng/results | zijuncheng | 2023-08-18T04:50:36Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T04:50:11Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9133333333333333
- name: F1
type: f1
value: 0.9161290322580645
- name: Precision
type: precision
value: 0.8875
- name: Recall
type: recall
value: 0.9466666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2250
- Accuracy: 0.9133
- F1: 0.9161
- Precision: 0.8875
- Recall: 0.9467
## Model description
More information needed
## Intended uses & limitations
More information needed
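No usage snippet is provided; as a minimal sketch, the checkpoint is assumed to work with the standard text-classification pipeline (label names are whatever the Trainer saved):

```python
from transformers import pipeline

# Load the fine-tuned RoBERTa sentiment classifier directly from the Hub
classifier = pipeline("text-classification", model="zijuncheng/results")
print(classifier("This movie was an absolute delight from start to finish."))
```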
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6922 | 0.98 | 46 | 0.6867 | 0.7433 | 0.6778 | 0.9101 | 0.54 |
| 0.2634 | 1.98 | 93 | 0.3428 | 0.8833 | 0.8736 | 0.9528 | 0.8067 |
| 0.1736 | 2.94 | 138 | 0.2250 | 0.9133 | 0.9161 | 0.8875 | 0.9467 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-2 | BenjaminOcampo | 2023-08-18T04:43:27Z | 3 | 0 | transformers | [
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T04:42:38Z | ---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-2
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
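No official snippet is provided; as an assumed starting point, the sketch below treats the checkpoint as a standard BERT sequence-classification model (the label mapping is undocumented here).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class meanings are not documented in this card
```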
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yzzhong/pixelcopter_policy | yzzhong | 2023-08-18T04:31:58Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-17T05:58:50Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter_policy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 12.00 +/- 9.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Yntec/samdoesartsUlt | Yntec | 2023-08-18T04:07:41Z | 334 | 4 | diffusers | [
"diffusers",
"safetensors",
"art",
"anime",
"style",
"checkpoint",
"jinofcoolnes",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-07T12:27:42Z | ---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
tags:
- art
- anime
- style
- checkpoint
- jinofcoolnes
---
This model has the MoistMix VAE baked in.
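A minimal text-to-image sketch with `diffusers` (the step count and guidance scale below are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint (the MoistMix VAE is already baked in, so no separate VAE is needed)
pipe = StableDiffusionPipeline.from_pretrained("Yntec/samdoesartsUlt", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute little girl in tricycle, beautiful portrait"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sample.png")
```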
Previews and prompts:

(lora)0.5 , (amakawa hano)0.5 , 1 girl, ray tracing, {best quality}, {{masterpiece}}, {highres}, original, extremely detailed 8K wallpaper, {an extremely delicate and beautiful}, , incredibly_absurdres, colorful, intricate detail, artbook

pretty cute little girl in tricycle, Screenshot of an surreal streetwear 70s round minimalist architecture, Sharp, 35mm still from a sci fi light blockbuster color movie made in 2022, beautiful portrait, set in 1860, in front of a spaceship that has just landed on an alien planet, are all wearing, a robot stands nearby
Original pages:
https://huggingface.co/jinofcoolnes/sammod
https://civitai.com/api/download/models/14459?type=VAE |
Harsha100/disneyPixarCartoonTypeA_10 | Harsha100 | 2023-08-18T03:57:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-18T03:45:33Z | ---
license: creativeml-openrail-m
---
|
sartmis1/starcoder-wikisql | sartmis1 | 2023-08-18T03:55:38Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-17T14:39:23Z | StarCoder model trained on the WikiSQL dataset. |
aphi/ownppo-LunarLander-v2 | aphi | 2023-08-18T03:21:10Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-17T17:31:52Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -25.85 +/- 39.85
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'aphi_experiment'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 500000
'learning_rate': 0.0001
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.1
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'aphi/ownppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
cecb/super_awesome_llama2 | cecb | 2023-08-18T03:18:17Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-11T19:08:48Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
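The base checkpoint this adapter was trained on is not recorded here; the sketch below is a minimal, assumed loading path with a Llama-2-7B base in 4-bit (the base model id is an assumption, and the quantization settings mirror the config above):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the actual base model is not documented in this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, "cecb/super_awesome_llama2")  # attach the PEFT adapter
```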
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/incomplete_mhd_MMVAEPlus_beta_5_scale_True_seed_1 | asenella | 2023-08-18T03:04:48Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-13T23:22:54Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/incomplete_mhd_MMVAEPlus_beta_5_scale_True_seed_1")
```
|
Spico/Humback-M0 | Spico | 2023-08-18T02:51:51Z | 9 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:OpenAssistant/oasst1",
"arxiv:2308.06259",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-16T05:58:07Z | ---
license: apache-2.0
datasets:
- OpenAssistant/oasst1
language:
- en
---
## 🐋 Humback
Humback is a framework for augmenting high-quality instruction data for supervised fine-tuning.

This is the SFT (supervised fine-tuning) model $M_{0}$ from a reproduction of [Humback](https://arxiv.org/pdf/2308.06259.pdf).
It is trained on the seed data, which is sampled from [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).

You may find more details and usage examples in [Spico197/Humback](https://github.com/Spico197/Humback).
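For a quick start outside that repository, here is a minimal generation sketch with `transformers` (the plain-instruction prompt and generation settings below are assumptions; the training-time template may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Spico/Humback-M0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map="auto" requires accelerate

# Plain instruction; the exact prompt template used during fine-tuning is not shown in this card
prompt = "Write a short story about a whale that learns to fly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```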
## 📜 Reference
```bibtex
@misc{li2023selfalignment,
title={Self-Alignment with Instruction Backtranslation},
author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
year={2023},
eprint={2308.06259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
gonglinyuan/metro_t0p_base | gonglinyuan | 2023-08-18T02:41:17Z | 133 | 0 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T19:40:01Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0p_base
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 64.90974729241879
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 44.642857142857146
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 32.35333333333333
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 32.199999999999996
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 32.9
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 61.34615384615385
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 50.860299921073405
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 61.5
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 82.59754142169962
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 43.22097191794464
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 51.20689655172414
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0+-Base
To use METRO-T0+-Base in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0p_base", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0p_base", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0p_basepp | gonglinyuan | 2023-08-18T02:41:04Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T20:39:10Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0p_basepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 71.44404332129963
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 60.714285714285715
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 36.906666666666666
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 35.24
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 36.46666666666666
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 62.21153846153847
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 54.08050513022889
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 78.875
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 90.29396044895778
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 67.56871141206932
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 51.5987460815047
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0+-Base++
To use METRO-T0+-Base++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0p_basepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0p_basepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0pp_base | gonglinyuan | 2023-08-18T02:40:48Z | 136 | 0 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T20:51:14Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0pp_base
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 75.41516245487364
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 46.904761904761905
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 34.233333333333334
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 33.906666666666666
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 35.71111111111111
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 55.0
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 51.22336227308604
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 69.5
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 84.17958311063602
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 43.432583150766774
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 65.12539184952979
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0++-Base
To use METRO-T0++-Base in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0pp_base", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0pp_base", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0pp_basepp | gonglinyuan | 2023-08-18T02:40:35Z | 139 | 1 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T21:26:47Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0pp_basepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 77.79783393501806
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 69.52380952380955
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 39.693333333333335
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 36.61333333333334
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 40.08333333333334
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 61.44230769230769
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 54.55406471981057
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 83.875
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 90.88188134687333
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 68.5421230830512
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 67.58620689655174
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0++-Base++
To use METRO-T0++-Base++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0pp_basepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0pp_basepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0_largepp | gonglinyuan | 2023-08-18T02:40:21Z | 138 | 0 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T22:05:11Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0_largepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 76.7509025270758
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 65.47619047619047
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 41.486666666666665
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 36.29333333333333
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 40.18333333333333
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 60.57692307692308
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 54.50670876085242
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 88.0
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 94.06734366648851
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 29.30691097390958
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 50.9717868338558
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0-Large++
To use METRO-T0-Large++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0_largepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0_largepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0p_largepp | gonglinyuan | 2023-08-18T02:39:37Z | 139 | 0 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-19T23:29:24Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0p_largepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 81.26353790613719
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 70.0
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 45.059999999999995
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 38.593333333333334
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 42.35
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 60.67307692307692
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 57.521704814522494
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 90.5
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 95.41421699625869
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 83.81796454889465
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 52.31974921630094
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0+-Large++
To use METRO-T0+-Large++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0p_largepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0p_largepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
gonglinyuan/metro_t0pp_largepp | gonglinyuan | 2023-08-18T02:32:53Z | 135 | 3 | transformers | [
"transformers",
"pytorch",
"fairseq_t5",
"text2text-generation",
"t5",
"custom_code",
"en",
"arxiv:2305.12567",
"arxiv:2110.08207",
"license:mit",
"model-index",
"autotrain_compatible",
"region:us"
] | text2text-generation | 2023-05-20T04:20:21Z | ---
license: mit
language:
- en
tags:
- t5
model-index:
- name: metro_t0pp_largepp
results:
- task:
type: natural-language-inference
dataset:
type: super_glue
name: RTE
config: rte
split: validation
metrics:
- type: accuracy
value: 83.68231046931406
- task:
type: natural-language-inference
dataset:
type: super_glue
name: CB
config: cb
split: validation
metrics:
- type: accuracy
value: 74.8809523809524
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R1
split: dev_r1
metrics:
- type: accuracy
value: 46.84
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R2
split: dev_r2
metrics:
- type: accuracy
value: 40.373333333333335
- task:
type: natural-language-inference
dataset:
type: anli
name: ANLI R3
split: dev_r3
metrics:
- type: accuracy
value: 44.949999999999996
- task:
type: coreference-resolution
dataset:
type: super_glue
name: WSC
config: wsc.fixed
split: validation
metrics:
- type: accuracy
value: 71.82692307692307
- task:
type: coreference-resolution
dataset:
type: winogrande
name: Winogrande XL
config: winogrande_xl
split: validation
metrics:
- type: accuracy
value: 62.74664561957379
- task:
type: multiple-choice-qa
dataset:
type: super_glue
name: COPA
config: copa
split: validation
metrics:
- type: accuracy
value: 92.625
- task:
type: multiple-choice-qa
dataset:
type: story_cloze
name: StoryCloze 2016
config: '2016'
split: validation
metrics:
- type: accuracy
value: 95.64938535542491
- task:
type: multiple-choice-qa
dataset:
type: hellaswag
name: HellaSwag
split: validation
metrics:
- type: accuracy
value: 83.74327823142801
- task:
type: word-sense-disambiguation
dataset:
type: super_glue
name: WiC
config: wic
split: validation
metrics:
- type: accuracy
value: 70.4858934169279
---
Official repository: https://github.com/gonglinyuan/metro_t0
# METRO-T0
Paper: [Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers](https://arxiv.org/abs/2305.12567) (ACL 2023)
METRO-T0 is a T5-style text-to-text Transformer pretrained using model-generated pretraining signals, prompt-finetuned on a family of public NLP tasks proposed in [T0](https://arxiv.org/abs/2110.08207).
METRO-T0 is highly parameter efficient. For example, METRO-T0-Large++ (775M parameters) outperforms GPT-3 (175B parameters) and T0-3B (3B parameters) on a wide range of NLP tasks.


## Use METRO-T0++-Large++
To use METRO-T0++-Large++ in PyTorch (Python 3.7+, PyTorch 1.12+ and transformers 4.17+ are prerequisites), refer to the code snippet below:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("gonglinyuan/metro_t0pp_largepp", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("gonglinyuan/metro_t0pp_largepp", trust_remote_code=True)
input_text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer([input_text], max_length=512, truncation=True, add_special_tokens=True, return_tensors="pt").input_ids
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # expected: positive
```
## Other METRO-T0 Models
| | # Parameters | Pretraining Data | Prompt-Finetuning Data |
|--------------------|--------------|------------------|------------------------|
| [METRO-T0-Base](https://huggingface.co/gonglinyuan/metro_t0_base) | 226M | Wikibook (16G) | T0 Train |
| [METRO-T0+-Base](https://huggingface.co/gonglinyuan/metro_t0p_base) | 226M | Wikibook (16G) | T0+ Train |
| [METRO-T0++-Base](https://huggingface.co/gonglinyuan/metro_t0pp_base) | 226M | Wikibook (16G) | T0++ Train |
| [METRO-T0-Base++](https://huggingface.co/gonglinyuan/metro_t0_basepp) | 256M | 160G corpus | T0 Train |
| [METRO-T0+-Base++](https://huggingface.co/gonglinyuan/metro_t0p_basepp) | 256M | 160G corpus | T0+ Train |
| [METRO-T0++-Base++](https://huggingface.co/gonglinyuan/metro_t0pp_basepp) | 256M | 160G corpus | T0++ Train |
| [METRO-T0-Large++](https://huggingface.co/gonglinyuan/metro_t0_largepp) | 775M | 160G corpus | T0 Train |
| [METRO-T0+-Large++](https://huggingface.co/gonglinyuan/metro_t0p_largepp) | 775M | 160G corpus | T0+ Train |
| [METRO-T0++-Large++](https://huggingface.co/gonglinyuan/metro_t0pp_largepp) | 775M | 160G corpus | T0++ Train |
## Citation
If you find the code and models useful for your research, please cite the following paper:
```
@misc{gong2023modelgenerated,
title={Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers},
author={Linyuan Gong and Chenyan Xiong and Xiaodong Liu and Payal Bajaj and Yiqing Xie and Alvin Cheung and Jianfeng Gao and Xia Song},
year={2023},
eprint={2305.12567},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2305.12567}
}
``` |
rodriguezj314/pixel_train_model_beta_hoodie_gap | rodriguezj314 | 2023-08-18T02:26:35Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-08-18T02:08:30Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks hoodie
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - rodriguezj314/pixel_train_model_beta_hoodie_gap
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks hoodie using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
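As a reference, a minimal inference sketch with diffusers (it assumes the full pipeline weights were pushed to this repository, which is the default behaviour of the DreamBooth training script; the output filename is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth pipeline from this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "rodriguezj314/pixel_train_model_beta_hoodie_gap", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the model was trained on
image = pipe("a photo of sks hoodie", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_hoodie.png")
```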
|
hoangdeeptry/whisper-vietnamese-3 | hoangdeeptry | 2023-08-18T02:23:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:duytran3112/whisper-sm-vivos",
"base_model:finetune:duytran3112/whisper-sm-vivos",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-17T15:39:28Z | ---
license: apache-2.0
base_model: duytran3112/whisper-sm-vivos
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-vietnamese-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-vietnamese-3
This model is a fine-tuned version of [duytran3112/whisper-sm-vivos](https://huggingface.co/duytran3112/whisper-sm-vivos) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6164
- Wer: 104.1257
- Cer: 100.8264
## Model description
More information needed
## Intended uses & limitations
More information needed
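A minimal transcription sketch is shown below (it assumes the Whisper processor was saved with the fine-tuned weights; the audio path is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="hoangdeeptry/whisper-vietnamese-3",
    chunk_length_s=30,
)

print(asr("vietnamese_sample.wav")["text"])
```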
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.0204 | 7.19 | 1000 | 0.4900 | 123.6149 | 105.0861 |
| 0.0022 | 14.39 | 2000 | 0.5671 | 111.9607 | 104.7015 |
| 0.0009 | 21.58 | 3000 | 0.6017 | 109.1316 | 101.5308 |
| 0.0006 | 28.78 | 4000 | 0.6164 | 104.1257 | 100.8264 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-0 | BenjaminOcampo | 2023-08-18T02:16:38Z | 3 | 0 | transformers | [
"transformers",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T02:15:48Z | ---
language: en
---
# Model Card for BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-0
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** BenjaminOcampo
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/huggingface_hub
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
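In the absence of official instructions, a speculative sketch (it assumes the checkpoint loads as a standard BERT sequence-classification model; the label mapping is not documented in this card):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "BenjaminOcampo/model-contrastive-bert__trained-in-dynahate__seed-0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label names and their order are not documented here
```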
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0sunfire0/a2c-PandaReachDense-v2 | 0sunfire0 | 2023-08-18T01:57:22Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"arxiv:2106.13687",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-15T22:28:39Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.68 +/- 0.53
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; adjust it to the actual file name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("0sunfire0/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Panda Gym environments: [arxiv.org/abs/2106.13687](https://arxiv.org/abs/2106.13687) |
abdiharyadi/indobart-v2-amr-to-text-linearized-penman-ilmy | abdiharyadi | 2023-08-18T01:53:25Z | 174 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:indobenchmark/indobart-v2",
"base_model:finetune:indobenchmark/indobart-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-18T01:43:12Z | ---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
model-index:
- name: indobart-v2-amr-to-text-linearized-penman-ilmy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobart-v2-amr-to-text-linearized-penman-ilmy
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 331 | 0.1928 |
| 0.2519 | 2.0 | 662 | 0.1978 |
| 0.2519 | 3.0 | 993 | 0.1927 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
reachosen/autotrain-in-basket-3.42-83100142189 | reachosen | 2023-08-18T01:31:02Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:reachosen/autotrain-data-in-basket-3.42",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-18T01:29:17Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- reachosen/autotrain-data-in-basket-3.42
co2_eq_emissions:
emissions: 0.7228932272231364
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 83100142189
- CO2 Emissions (in grams): 0.7229
## Validation Metrics
- Loss: 0.584
- Accuracy: 0.851
- Macro F1: 0.841
- Micro F1: 0.851
- Weighted F1: 0.847
- Macro Precision: 0.851
- Micro Precision: 0.851
- Weighted Precision: 0.853
- Macro Recall: 0.843
- Micro Recall: 0.851
- Weighted Recall: 0.851
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/reachosen/autotrain-in-basket-3.42-83100142189
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("reachosen/autotrain-in-basket-3.42-83100142189", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("reachosen/autotrain-in-basket-3.42-83100142189", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
nightdude/config_5111 | nightdude | 2023-08-18T01:27:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T01:25:34Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
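For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig` (the base model this adapter was trained against is not recorded in this card):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```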
### Framework versions
- PEFT 0.5.0.dev0
|
f4falalu/q-taxis | f4falalu | 2023-08-18T01:15:02Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T01:14:59Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxis
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="f4falalu/q-taxis", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
f4falalu/q-FrozenLake-v1-4x4-noSlippery | f4falalu | 2023-08-18T01:08:54Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-08-18T00:30:59Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="f4falalu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
scarlett623/wav2vec2-timit-xls-r-53-wandb-colab | scarlett623 | 2023-08-18T00:56:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-08-17T16:06:19Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-timit-xls-r-53-wandb-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-timit-xls-r-53-wandb-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3325
- Wer: 0.2897
- Cer: 0.0940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| No log | 0.69 | 400 | 3.1507 | 1.0 | 0.9806 |
| 4.3857 | 1.38 | 800 | 3.0109 | 1.0 | 0.9806 |
| 2.6835 | 2.08 | 1200 | 0.6181 | 0.5756 | 0.1795 |
| 0.9327 | 2.77 | 1600 | 0.4239 | 0.4718 | 0.1456 |
| 0.5602 | 3.46 | 2000 | 0.3691 | 0.4141 | 0.1301 |
| 0.5602 | 4.15 | 2400 | 0.3386 | 0.3894 | 0.1231 |
| 0.4407 | 4.84 | 2800 | 0.3122 | 0.3676 | 0.1177 |
| 0.3437 | 5.54 | 3200 | 0.3149 | 0.3601 | 0.1152 |
| 0.3154 | 6.23 | 3600 | 0.3146 | 0.3495 | 0.1119 |
| 0.267 | 6.92 | 4000 | 0.3039 | 0.3427 | 0.1089 |
| 0.267 | 7.61 | 4400 | 0.3313 | 0.3409 | 0.1092 |
| 0.2354 | 8.3 | 4800 | 0.2986 | 0.3365 | 0.1064 |
| 0.2191 | 9.0 | 5200 | 0.3235 | 0.3353 | 0.1074 |
| 0.1937 | 9.69 | 5600 | 0.3117 | 0.3320 | 0.1071 |
| 0.1803 | 10.38 | 6000 | 0.3102 | 0.3233 | 0.1040 |
| 0.1803 | 11.07 | 6400 | 0.3176 | 0.3196 | 0.1030 |
| 0.1635 | 11.76 | 6800 | 0.3166 | 0.3220 | 0.1036 |
| 0.1551 | 12.46 | 7200 | 0.2836 | 0.3160 | 0.1021 |
| 0.1566 | 13.15 | 7600 | 0.3146 | 0.3186 | 0.1032 |
| 0.1424 | 13.84 | 8000 | 0.3392 | 0.3167 | 0.1036 |
| 0.1424 | 14.53 | 8400 | 0.3254 | 0.3109 | 0.1001 |
| 0.1379 | 15.22 | 8800 | 0.3249 | 0.3127 | 0.1009 |
| 0.1192 | 15.92 | 9200 | 0.3408 | 0.3119 | 0.1010 |
| 0.1178 | 16.61 | 9600 | 0.3551 | 0.3061 | 0.0997 |
| 0.1112 | 17.3 | 10000 | 0.3250 | 0.3059 | 0.0991 |
| 0.1112 | 17.99 | 10400 | 0.3127 | 0.3037 | 0.0983 |
| 0.1022 | 18.69 | 10800 | 0.3370 | 0.3067 | 0.0994 |
| 0.1031 | 19.38 | 11200 | 0.3351 | 0.3048 | 0.0991 |
| 0.0926 | 20.07 | 11600 | 0.3433 | 0.2994 | 0.0974 |
| 0.0861 | 20.76 | 12000 | 0.3145 | 0.3003 | 0.0971 |
| 0.0861 | 21.45 | 12400 | 0.3367 | 0.2980 | 0.0973 |
| 0.0935 | 22.15 | 12800 | 0.3139 | 0.3016 | 0.0986 |
| 0.0784 | 22.84 | 13200 | 0.3181 | 0.2990 | 0.0972 |
| 0.078 | 23.53 | 13600 | 0.3347 | 0.2938 | 0.0961 |
| 0.0761 | 24.22 | 14000 | 0.3371 | 0.2921 | 0.0949 |
| 0.0761 | 24.91 | 14400 | 0.3274 | 0.2916 | 0.0952 |
| 0.0784 | 25.61 | 14800 | 0.3152 | 0.2927 | 0.0942 |
| 0.0714 | 26.3 | 15200 | 0.3237 | 0.2924 | 0.0943 |
| 0.0671 | 26.99 | 15600 | 0.3183 | 0.2914 | 0.0945 |
| 0.0684 | 27.68 | 16000 | 0.3307 | 0.2931 | 0.0950 |
| 0.0684 | 28.37 | 16400 | 0.3383 | 0.2913 | 0.0940 |
| 0.07 | 29.07 | 16800 | 0.3318 | 0.2901 | 0.0940 |
| 0.0624 | 29.76 | 17200 | 0.3325 | 0.2897 | 0.0940 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
IngeniousArtist/llama2-finance | IngeniousArtist | 2023-08-18T00:54:31Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-08-09T04:32:51Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
model-index:
- name: llama2-finance
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-finance
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the financial_phrasebank dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 20
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Noct-Blib/lora-trained-xl-colab | Noct-Blib | 2023-08-18T00:49:48Z | 4 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-08-17T23:12:39Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: zkz
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Noct-Blib/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on zkz using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
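A minimal inference sketch (it assumes the LoRA weights are stored in the default diffusers layout of this repository; the output filename is illustrative):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Use the same fp16-fix VAE as during training (see above)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaptation weights from this repository
pipe.load_lora_weights("Noct-Blib/lora-trained-xl-colab")

# "zkz" is the instance prompt the adapter was trained on
image = pipe("zkz", num_inference_steps=30).images[0]
image.save("zkz.png")
```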
|
mekjr1/opus-mt-en-es-finetuned-es-to-pbb-v2 | mekjr1 | 2023-08-18T00:45:04Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-08-17T03:20:54Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-es-to-pbb-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-es-to-pbb-v2
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6535
- Bleu: 1.2729
- Gen Len: 90.5316
## Model description
More information needed
## Intended uses & limitations
More information needed
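A minimal usage sketch (the example sentence is illustrative; per the model name, the fine-tuned direction is Spanish (es) to pbb):
```python
from transformers import pipeline

# Marian checkpoints are single-direction, so the plain "translation" task suffices
translator = pipeline("translation", model="mekjr1/opus-mt-en-es-finetuned-es-to-pbb-v2")
print(translator("Buenos días, ¿cómo estás?")[0]["translation_text"])
```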
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 199 | 2.3626 | 0.171 | 109.5972 |
| No log | 2.0 | 398 | 2.0302 | 0.3065 | 95.3081 |
| 2.712 | 3.0 | 597 | 1.8861 | 0.7019 | 96.8497 |
| 2.712 | 4.0 | 796 | 1.8081 | 0.6924 | 93.4432 |
| 2.712 | 5.0 | 995 | 1.7496 | 0.9599 | 90.7563 |
| 1.942 | 6.0 | 1194 | 1.7133 | 1.0843 | 92.4646 |
| 1.942 | 7.0 | 1393 | 1.6859 | 1.1072 | 92.8725 |
| 1.7861 | 8.0 | 1592 | 1.6696 | 1.243 | 91.2184 |
| 1.7861 | 9.0 | 1791 | 1.6569 | 1.2595 | 90.1641 |
| 1.7861 | 10.0 | 1990 | 1.6535 | 1.2729 | 90.5316 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gang21/llama2-icd10-common2 | gang21 | 2023-08-18T00:34:00Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T00:33:57Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
asenella/incomplete_mhd_MMVAE_beta_5_scale_True_seed_2 | asenella | 2023-08-18T00:28:35Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-13T22:50:11Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
asenella/incomplete_mhd_MMVAE_beta_5_scale_True_seed_3 | asenella | 2023-08-18T00:27:50Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-13T23:02:14Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
jordyvl/vit-base_rvl_cdip_crl_softmax_rank1_fixed | jordyvl | 2023-08-18T00:26:47Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-31T11:36:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_cdip_crl_softmax_rank1_fixed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_cdip_crl_softmax_rank1_fixed
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7091
- Accuracy: 0.9032
- Brier Loss: 0.1756
- Nll: 1.0964
- F1 Micro: 0.9032
- F1 Macro: 0.9033
- Ece: 0.0854
- Aurc: 0.0188
## Model description
More information needed
## Intended uses & limitations
More information needed
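A minimal usage sketch (it assumes the image processor configuration was saved with the checkpoint; the file path is illustrative):
```python
from transformers import pipeline

# Document-image classification with the fine-tuned ViT checkpoint
classifier = pipeline(
    "image-classification",
    model="jordyvl/vit-base_rvl_cdip_crl_softmax_rank1_fixed",
)
print(classifier("scanned_document.png"))
```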
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.2289 | 1.0 | 10000 | 0.4298 | 0.8826 | 0.1794 | 1.2461 | 0.8826 | 0.8841 | 0.0553 | 0.0199 |
| 0.1972 | 2.0 | 20000 | 0.4350 | 0.8859 | 0.1769 | 1.3140 | 0.8859 | 0.8862 | 0.0558 | 0.0197 |
| 0.1414 | 3.0 | 30000 | 0.4423 | 0.8938 | 0.1702 | 1.2433 | 0.8938 | 0.8948 | 0.0639 | 0.0181 |
| 0.0903 | 4.0 | 40000 | 0.5076 | 0.8943 | 0.1753 | 1.2033 | 0.8943 | 0.8941 | 0.0766 | 0.0181 |
| 0.0684 | 5.0 | 50000 | 0.5592 | 0.8963 | 0.1783 | 1.2422 | 0.8963 | 0.8965 | 0.0811 | 0.0194 |
| 0.0313 | 6.0 | 60000 | 0.6384 | 0.8956 | 0.1836 | 1.2359 | 0.8957 | 0.8957 | 0.0861 | 0.0218 |
| 0.0163 | 7.0 | 70000 | 0.6673 | 0.9005 | 0.1788 | 1.1927 | 0.9005 | 0.9006 | 0.0855 | 0.0215 |
| 0.0104 | 8.0 | 80000 | 0.6929 | 0.9001 | 0.1791 | 1.1768 | 0.9001 | 0.9000 | 0.0860 | 0.0204 |
| 0.0036 | 9.0 | 90000 | 0.7131 | 0.9018 | 0.1780 | 1.1295 | 0.9018 | 0.9018 | 0.0866 | 0.0195 |
| 0.0023 | 10.0 | 100000 | 0.7091 | 0.9032 | 0.1756 | 1.0964 | 0.9032 | 0.9033 | 0.0854 | 0.0188 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
asenella/incomplete_mhd_MMVAE_beta_5_scale_True_seed_0 | asenella | 2023-08-18T00:25:38Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-10T15:07:29Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
KingKazma/cnn_dailymail_gpt2_lora_500_4_50000_8_e3_s6789_v4_l5_r2 | KingKazma | 2023-08-18T00:18:25Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-18T00:18:23Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
ihgn/gpt2-paraphrase | ihgn | 2023-08-18T00:10:26Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-06-11T16:25:56Z | ---
language:
- en
metrics:
- bleu
- rouge
pipeline_tag: text2text-generation
---
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# The card's tags indicate a T5-style seq2seq checkpoint; loading via the Auto classes is assumed to work.
tokenizer = AutoTokenizer.from_pretrained("ihgn/gpt2-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("ihgn/gpt2-paraphrase")


def paraphrase(
    question,
    num_beams=5,
    num_beam_groups=5,
    num_return_sequences=1,
    repetition_penalty=10.0,
    diversity_penalty=3.0,
    no_repeat_ngram_size=2,
    temperature=0.7,
    max_length=128
):
    # Prefix the input as expected by the paraphrase task
    input_ids = tokenizer(
        f'paraphrase: {question}',
        return_tensors="pt",
        padding="longest",
        max_length=max_length,
        truncation=True,
    ).input_ids

    # Diverse beam search over `num_beam_groups` groups
    outputs = model.generate(
        input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
        num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
        num_beams=num_beams, num_beam_groups=num_beam_groups,
        max_length=max_length, diversity_penalty=diversity_penalty
    )

    res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return res
``` |
yummyummy/bark-cpp-ggml-models-test | yummyummy | 2023-08-17T23:38:02Z | 0 | 0 | null | [
"region:us"
] | null | 2023-08-17T08:23:28Z | initial test on https://github.com/PABannier/bark.cpp/commit/58d24ea9f1836247d2aee520d2056c46a6f09c5a
73s 10.3g ram q4 f16mixed
50s 10.2g ram f16 mixed
67s 12.1g f32
|
siacus/llama-v2-huff-test | siacus | 2023-08-17T23:19:01Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-08-17T23:05:10Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|