modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-02 00:43:14) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 461 classes) | tags (sequence, 1 to 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-02 00:42:27) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
GeorgeDam/ppo-Huggy | GeorgeDam | 2023-07-06T23:44:03Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-06T23:43:59Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GeorgeDam/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
garrettbaber/twitter-roberta-base-anger-intensity | garrettbaber | 2023-07-06T23:29:21Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"text-regression",
"anger",
"emotion",
"emotion intensity",
"unk",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T23:19:12Z | ---
tags:
- text-regression
- anger
- emotion
- emotion intensity
language:
- unk
widget:
- text: I am furious
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.030118000944741423
---
# twitter-roberta-base-anger-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 Task 1: Affect in Tweets dataset (subtask EI-reg / text regression).
Warning: the hosted inference API produces inaccurate values.
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72775139028
- CO2 Emissions (in grams): 0.0301
## Validation Metrics
- Loss: 0.011
- MSE: 0.011
- MAE: 0.085
- R2: 0.641
- RMSE: 0.103
- Explained Variance: 0.641
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I am furious"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-anger-intensity
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-anger-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-anger-intensity")
inputs = tokenizer("I am furious", return_tensors="pt")
outputs = model(**inputs)
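# Hedged addition, not part of the original card: assuming the model was trained as a
# single-output regression head (as the "Single Column Regression" setup above implies),
# the predicted anger intensity is the raw logit value.
intensity = outputs.logits.squeeze().item()
print(intensity)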
``` |
AbduBot/dqn-SpaceInvadersNoFrameskip-v4 | AbduBot | 2023-07-06T23:27:22Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T23:26:45Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 653.50 +/- 202.04
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AbduBot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AbduBot -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AbduBot
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
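For reference, these settings map roughly onto the Stable-Baselines3 `DQN` constructor as sketched below. This is illustrative only: the RL Zoo additionally applies the `AtariWrapper` and 4-frame stacking listed above, which this sketch omits.
```python
from stable_baselines3 import DQN

# Sketch of the hyperparameters above; the RL Zoo's actual env setup (AtariWrapper,
# frame stacking) is omitted here for brevity.
model = DQN(
    "CnnPolicy",
    "SpaceInvadersNoFrameskip-v4",
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
)
model.learn(total_timesteps=1_000_000)
```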
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
YakovElm/Hyperledger_15_BERT_More_Properties | YakovElm | 2023-07-06T23:25:07Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T23:24:32Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger_15_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger_15_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3137
- Train Accuracy: 0.9035
- Validation Loss: 0.3679
- Validation Accuracy: 0.8807
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3228 | 0.8993 | 0.3659 | 0.8807 | 0 |
| 0.3217 | 0.9035 | 0.3648 | 0.8807 | 1 |
| 0.3137 | 0.9035 | 0.3679 | 0.8807 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zwpython/zw-chinese-vicuna-7B-v1.3 | zwpython | 2023-07-06T23:17:43Z | 0 | 1 | null | [
"region:us"
] | null | 2023-07-06T23:12:58Z | World-first release: vicuna-7B-v1.3 with Chinese support; the base is the official vicuna-7B-v1.3 release.
For more, see https://github.com/ziwang-com/chinese-StableVicuna and the zw WeChat official account.
In response to the national AI strategy, and to improve the competitiveness of domestic AI/GPT startup teams so they don't lose at the starting line,
the zw-vicuna series of zw Chinese-localized models is offered for free download for the first time. Baidu Netdisk extraction code: hiks
Link: https://pan.baidu.com/s/1EH19ablXVLYQP1f-IaPS-Q?pwd=hiks
If this changes, see the QQ group files for the latest download link: 655402626 (GPT+ QQ group, 1,000+ members)
The zw-vicuna Chinese-localized model files are in GGML format,
a CPU+GPU build that runs with llama.cpp on Windows, Linux, and macOS (a minimal invocation sketch follows the prompt template below).
For details, see https://github.com/ggerganov/llama.cpp
Prompt template:
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
USER: prompt
ASSISTANT:
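As a minimal sketch (assuming the llama.cpp `main` binary and an illustrative GGML file name; substitute the file actually downloaded from the link above):
```bash
./main -m ./zw-vicuna-7b-v1.3.ggmlv3.q4_0.bin -c 2048 -n 256 --color \
  -p "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input USER: Hello ASSISTANT:"
```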
For more details and technical parameters, see:
Official original: https://huggingface.co/lmsys/vicuna-7b-v1.3
GitHub project: https://github.com/ziwang-com/chinese-StableVicuna |
nferroukhi/ufalcon-7b-guanaco-lora | nferroukhi | 2023-07-06T23:13:55Z | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-06-29T12:15:05Z | ---
tags:
- generated_from_trainer
model-index:
- name: ufalcon-7B-guanaco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ufalcon-7B-guanaco
This model is a fine-tuned version of [nferroukhi/WizardLM-Uncensored-Falcon-7b-sharded-bf16](https://huggingface.co/nferroukhi/WizardLM-Uncensored-Falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zwpython/zw-chinese-vicuna-33B | zwpython | 2023-07-06T23:13:24Z | 0 | 4 | null | [
"region:us"
] | null | 2023-06-24T11:08:39Z | World-first release: vicuna-33B-v1.3 with Chinese support; the base is the official vicuna-33B release.
The v33 model file is 33 GB and cannot run on a single RTX 4090. Testing used a 12th-gen i9 with 64 GB of RAM, running close to full load.
Because this is the master copy and also a test build, no quantized (vector-compressed) version has been made; that will come with the official release. For now this only verifies that the engineering pipeline works.
For more, see https://github.com/ziwang-com/chinese-StableVicuna and the zw WeChat official account.
In response to the national AI strategy, and to improve the competitiveness of domestic AI/GPT startup teams so they don't lose at the starting line,
the zw-vicuna-33B-cn Chinese-localized version is offered for free download for the first time.
zw-vicuna-33B Chinese version, Baidu Netdisk extraction code: hiks
Link: https://pan.baidu.com/s/1EH19ablXVLYQP1f-IaPS-Q?pwd=hiks
If this changes, see the QQ group files for the latest download link: 655402626 (GPT+ QQ group, 1,000+ members)
The zw-vicuna-33B Chinese-localized model files are in GGML format,
a CPU+GPU build that runs with llama.cpp on Windows, Linux, and macOS.
For details, see https://github.com/ggerganov/llama.cpp
Prompt template:
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
USER: prompt
ASSISTANT:
For more details and technical parameters, see:
Official original: https://huggingface.co/lmsys/vicuna-33b-v1.3
GitHub project: https://github.com/ziwang-com/chinese-StableVicuna
|
garrettbaber/twitter-roberta-base-sadness-intensity | garrettbaber | 2023-07-06T23:08:47Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"text-regression",
"sadness",
"emotion",
"emotion intensity",
"unk",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T23:04:50Z | ---
tags:
- text-regression
- sadness
- emotion
- emotion intensity
language:
- unk
widget:
- text: I'm feeling down
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.025884770512937715
---
# twitter-roberta-base-sadness-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 Task 1: Affect in Tweets dataset (subtask EI-reg / text regression).
Warning: the hosted inference API produces inaccurate values.
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72772139027
- CO2 Emissions (in grams): 0.0259
## Validation Metrics
- Loss: 0.011
- MSE: 0.011
- MAE: 0.079
- R2: 0.726
- RMSE: 0.103
- Explained Variance: 0.727
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d "{\"inputs\": \"I'm feeling down\"}" https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-sadness-intensity
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-sadness-intensity")
inputs = tokenizer("I'm feeling down", return_tensors="pt")
outputs = model(**inputs)
``` |
andmusician/WizardLM-7B-GPTQ | andmusician | 2023-07-06T23:01:47Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-04T11:47:08Z | ---
license: apache-2.0
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
inference: false
---
# WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct
These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored' version of WizardLM](https://huggingface.co/ehartford/WizardLM-7B-Uncensored).
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
Eric did a fresh 7B training using the WizardLM method, on [a dataset edited to remove all the "I'm sorry.." type ChatGPT responses](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered).
## Other repositories available
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ)
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGML)
* [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
## How to easily download and use this model in text-generation-webui
Open the text-generation-webui UI as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-7B-uncensored-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-7B-uncensored-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
**Compatible file - WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors**
In the `main` branch - the default one - you will find `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`
This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility.
It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.
* `WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128g. No act-order.
* Command used to create the GPTQ:
```
python llama.py models/ehartford_WizardLM-7B-Uncensored c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/eric-gptq/WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors
```
# Eric's original model card
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out, including Rohan, TheBloke, and Caseus
# WizardLM's original model card
Overview of Evol-Instruct:
Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, in order to improve the performance of LLMs.

 |
Meina/MeinaUnreal_V4 | Meina | 2023-07-06T23:01:40Z | 31 | 3 | diffusers | [
"diffusers",
"safetensors",
"art",
"anime",
"meina",
"unreal",
"semirealistic",
"2.5d",
"sexy",
"fantasy",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T22:44:41Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- anime
- meina
- unreal
- semirealistic
- 2.5d
- sexy
- fantasy
---
MeinaUnreal's objective is to produce anime art with a 2.5D feel.
(the VAE is already baked into the model)
For examples and prompts, please check out: https://civitai.com/models/18798/meinaunreal
I have a Discord server where you can post images that you generated, discuss prompts and/or ask for help:
https://discord.gg/XC9nGZNDUd If you like one of my models and want to support its updates,
I've made a Ko-fi page: https://ko-fi.com/meina where you can buy me a coffee <3
And a Patreon page: https://www.patreon.com/MeinaMix where you can support me and get access to betas of my models!
You may also try this model using Sinkin.ai: https://sinkin.ai/m/PREaKGN
Recommendations for use: enable Quantization in K samplers.
Hires.fix is needed for prompts where the character is far away in order to make decent images; it drastically improves the quality of the face and eyes!
Recommended parameters:
Sampler: DPM++ 2M Karras: 20 to 40 steps.
Sampler: DPM++ SDE Karras: 20 to 30 steps.
CFG Scale: 7.
Resolutions: 512x768, 512x1024 for Portrait!
Resolutions: 768x512, 1024x512, 1536x512 for Landscape!
Hires.fix: R-ESRGAN 4x+Anime6b, with 15 steps at 0.3 denoising.
Clip Skip: 2.
Negatives: ' (worst quality, low quality:1.4), monochrome, zombie, (interlocked fingers), ' |
garrettbaber/twitter-roberta-base-joy-intensity | garrettbaber | 2023-07-06T22:58:25Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"text-regression",
"joy",
"emotion",
"emotion intensity",
"en",
"dataset:SemEval-2018-Task-1-Text-Regression-Task",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T22:47:30Z | ---
tags:
- text-regression
- joy
- emotion
- emotion intensity
language:
- en
widget:
- text: I am elated!
datasets:
- SemEval-2018-Task-1-Text-Regression-Task
co2_eq_emissions:
emissions: 0.03988347977318191
---
# twitter-roberta-base-joy-intensity
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-2022-154m on the SemEval 2018 Task 1: Affect in Tweets dataset (subtask EI-reg / text regression).
Warning: the hosted inference API produces inaccurate values.
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 72771139026
- CO2 Emissions (in grams): 0.0399
## Validation Metrics
- Loss: 0.013
- MSE: 0.013
- MAE: 0.088
- R2: 0.707
- RMSE: 0.116
- Explained Variance: 0.709
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I am elated!"}' https://api-inference.huggingface.co/models/garrettbaber/twitter-roberta-base-joy-intensity
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("garrettbaber/twitter-roberta-base-joy-intensity")
tokenizer = AutoTokenizer.from_pretrained("garrettbaber/twitter-roberta-base-joy-intensity")
inputs = tokenizer("I am elated!", return_tensors="pt")
outputs = model(**inputs)
``` |
Shularp/Helsinki_en-mul_test | Shularp | 2023-07-06T22:13:45Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-04T07:48:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: Helsinki_en-mul_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Helsinki_en-mul_test
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4474 | 1.0 | 22199 | 1.6242 |
| 1.8308 | 2.0 | 44398 | 1.4888 |
| 1.3957 | 3.0 | 66597 | 1.4573 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jaderfigueredo/distilbert-base-uncased-finetuned-cola | jaderfigueredo | 2023-07-06T22:03:25Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T12:22:59Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jaderfigueredo/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jaderfigueredo/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1906
- Validation Loss: 0.5356
- Train Matthews Correlation: 0.5288
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5205 | 0.4630 | 0.4449 | 0 |
| 0.3218 | 0.4463 | 0.5481 | 1 |
| 0.1906 | 0.5356 | 0.5288 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_random_usrb | aroot | 2023-07-06T21:56:18Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T21:39:51Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_random_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_random_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8937
- Bleu: 4.2688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_central_usrb | aroot | 2023-07-06T21:50:10Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T21:31:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1496
- Bleu: 31.8498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/Tulu-7B-SuperHOT-8K-GGML | TheBloke | 2023-07-06T21:49:56Z | 0 | 1 | null | [
"arxiv:2306.04751",
"arxiv:2302.13971",
"arxiv:2301.13688",
"arxiv:2304.07327",
"arxiv:2304.03277",
"license:other",
"region:us"
] | null | 2023-07-06T17:51:17Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B GGML
These files are GGML format model files for [Allen AI's Tulu 7B](https://huggingface.co/TheBloke/tulu-7B-fp16).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/allenai/tulu-7b)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| tulu-7b-superhot-8k.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| tulu-7b-superhot-8k.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| tulu-7b-superhot-8k.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| tulu-7b-superhot-8k.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| tulu-7b-superhot-8k.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 tulu-7b-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
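As a minimal sketch of what that looks like (the module and function names are the ones given above; the repository ID is the SuperHOT fp16 repo listed earlier, and the loading call is illustrative):
```python
# Apply the RoPE-scaling monkey-patch before transformers builds the model.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope
replace_llama_rope_with_scaled_rope()

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/Tulu-7B-SuperHOT-8K-fp16"  # illustrative choice of checkpoint
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
```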
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Allen AI's Tulu 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B fp16
These files are pytorch format fp16 model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)
## Prompt template
The following template should be used:
```
<|user|>
prompt goes here
<|assistant|>
```
**Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
In other words, the prompt is:
```
<|user|>\nprompt goes here\n<|assistant|>\n
```
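For example, a small helper to build that prompt string might look like this (a sketch; the wrapper function is illustrative, while the template itself is taken from above):
```python
def build_prompt(user_message: str) -> str:
    # Note the trailing newline after <|assistant|>, which the card says is important.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

print(build_prompt("Write a haiku about llamas."))
```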
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Allen AI's Tulu 7B
# Tulu 7B
This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
|
JoshELambert/weakgov | JoshELambert | 2023-07-06T21:47:24Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-07-06T21:21:31Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# JoshELambert/weakgov
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (see the sketch after this list).
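A minimal sketch of that two-stage loop with the `setfit` library, assuming a generic mpnet-based Sentence Transformer backbone and two invented training examples (the real training data and labels are not part of this card):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot data; the label semantics here are hypothetical.
train_ds = Dataset.from_dict({
    "text": ["institutions enforce rules effectively", "oversight bodies lack enforcement power"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # stage 1: contrastive fine-tuning of the embedder
    num_iterations=20,                # contrastive pairs generated per example
    batch_size=16,
)
trainer.train()  # stage 2 fits the classification head on the fine-tuned embeddings
preds = trainer.model(["weak governance undermines policy"])
```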
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/weakgov")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
nkpz/bayling-13b-v1.1-gptq-32g | nkpz | 2023-07-06T21:40:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T21:11:59Z | ---
license: gpl-3.0
---
4-bit (32 groupsize) quantized files for [ICTNLP/bayling-13b-v1.1](https://huggingface.co/ICTNLP/bayling-13b-v1.1)
`BayLing (百聆, bǎi líng) is an instruction-following LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.`
Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
Command used to quantize:
```
python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors
```
|
am-infoweb/question-answering-roberta-anu-SQuAD-v2 | am-infoweb | 2023-07-06T21:32:52Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"Question Answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T21:27:17Z | ---
license: apache-2.0
tags:
- Question Answering
metrics:
- squad
model-index:
- name: anuragsingh28/question-answering-roberta-anu-s-v2
results: []
---
# Question Answering
The model is intended for the question-answering task: given a question and context, it attempts to infer the answer text, answer span, and confidence score.<br>
The model is encoder-only (deepset/roberta-base-squad2) with a question-answering LM head, fine-tuned on the SQuAD dataset, reaching **exact_match:** 84.83 & **f1:** 91.80.
Please follow this link for [Encoder-based Question Answering V1](https://huggingface.co/anuragsingh28/question-answering-roberta-anu-s-v2/)
Example code:
```python
from transformers import pipeline
model_checkpoint = "anuragsingh28/question-answering-roberta-anu-s-v2"
context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer = pipeline("question-answering", model=model_checkpoint)
question_answerer(question=question, context=context)
```
## Training and evaluation data
SQUAD Split
## Training procedure
Preprocessing:
1. Longer SQuAD examples were sub-chunked with a maximum input context length of 384 tokens and a stride of 128 tokens (see the sketch below).
2. Target answers were readjusted for the sub-chunks; sub-chunks with no answer or only a partial answer had their target answer span set to (0, 0).
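A sketch of how such sub-chunking is typically done with the tokenizer (illustrative, not the author's exact preprocessing code; the question/context pair is made up):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

question = "Which deep learning libraries back Transformers?"
context = "Transformers is backed by Jax, PyTorch and TensorFlow. " * 100  # artificially long context

encoded = tokenizer(
    question,
    context,
    max_length=384,
    stride=128,
    truncation="only_second",          # only truncate the context, never the question
    return_overflowing_tokens=True,    # emit one feature per sub-chunk
    return_offsets_mapping=True,       # needed to map answers back to character spans
)
print(len(encoded["input_ids"]))       # number of sub-chunks produced
```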
Metrics:
1. Adjusted accordingly to handle sub-chunking.
2. n best = 20
3. skip answers with length zero or higher than max answer length (30)
### Training hyperparameters
Custom Training Loop:
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
{'exact_match': 84.83443708609272, 'f1': 91.79987545811638}
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0
|
HeshamMamdouh/mt5-small-v2-sum-fine-tuned | HeshamMamdouh | 2023-07-06T21:25:34Z | 61 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-06T21:22:53Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mt5-small-v2-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mt5-small-v2-sum-fine-tuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.7918
- Validation Loss: 9.1352
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 14.5286 | 12.5993 | 0 |
| 13.7167 | 12.7206 | 1 |
| 13.7518 | 12.4721 | 2 |
| 13.5991 | 12.0654 | 3 |
| 13.0693 | 11.5237 | 4 |
| 12.8718 | 11.5755 | 5 |
| 12.6745 | 11.3361 | 6 |
| 12.4659 | 10.6694 | 7 |
| 12.2692 | 10.0483 | 8 |
| 12.2115 | 10.5089 | 9 |
| 11.9810 | 10.3895 | 10 |
| 11.6432 | 10.1090 | 11 |
| 11.6436 | 9.4868 | 12 |
| 11.3711 | 9.9035 | 13 |
| 11.1223 | 8.9180 | 14 |
| 10.9886 | 9.3682 | 15 |
| 10.8426 | 8.9964 | 16 |
| 10.5593 | 9.2168 | 17 |
| 10.5568 | 8.9877 | 18 |
| 10.1875 | 8.8072 | 19 |
| 10.1814 | 10.3268 | 20 |
| 10.0053 | 11.1192 | 21 |
| 9.6850 | 10.9950 | 22 |
| 9.6080 | 10.7909 | 23 |
| 9.4208 | 10.9226 | 24 |
| 9.3501 | 10.1040 | 25 |
| 9.2757 | 10.1148 | 26 |
| 9.1751 | 9.9607 | 27 |
| 8.9227 | 9.1899 | 28 |
| 8.7918 | 9.1352 | 29 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.12.1
|
aroot/eng-guj-simcse_random_usrb | aroot | 2023-07-06T21:19:25Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T20:57:54Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_usrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_usrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2995
- Bleu: 2.6979
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cjohlmacher/unit2-taxi-focused | cjohlmacher | 2023-07-06T21:13:18Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T21:13:16Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-focused
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.24 +/- 2.80
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper from the Hugging Face Deep RL Course (Unit 2) notebook
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-focused", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aroot/eng-mya-simcse_central_usblu | aroot | 2023-07-06T21:12:43Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T20:52:09Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_central_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_central_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8855
- Bleu: 4.1385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cjohlmacher/unit2-taxi-explorer | cjohlmacher | 2023-07-06T21:12:20Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T20:28:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-explorer
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-explorer", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BubbleJoe/swin-tiny-patch4-window7-224-finetuned-eurosat | BubbleJoe | 2023-07-06T20:47:28Z | 213 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T20:30:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9748148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0742
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2967 | 1.0 | 190 | 0.1191 | 0.9622 |
| 0.1776 | 2.0 | 380 | 0.0897 | 0.9719 |
| 0.1334 | 3.0 | 570 | 0.0742 | 0.9748 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
espnet/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw | espnet | 2023-07-06T20:38:53Z | 1 | 0 | espnet | [
"espnet",
"audio",
"self-supervised-learning",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-12-31T03:54:13Z | ---
tags:
- espnet
- audio
- self-supervised-learning
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 SSL model
### `simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 753f40d61813436d4e76660904d02eaed7a6649e
pip install -e .
cd egs2/librispeech/ssl1
./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
```
## SSL config
<details><summary>expand</summary>
```
config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it0.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/hubert_iter0_train_ssl_torchaudiohubert_base_960h_pretrain_it0_raw
ngpu: 1
seed: 0
num_workers: 64
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45091
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 250
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 48000000
valid_batch_bins: null
train_shape_file:
- exp/hubert_iter0_stats_raw/train/speech_shape
- exp/hubert_iter0_stats_raw/train/text_shape.word
valid_shape_file:
- exp/hubert_iter0_stats_raw/valid/speech_shape
- exp/hubert_iter0_stats_raw/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 400
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960/wav.scp
- speech
- sound
- - dump/raw/train_960/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text.km.kmeans_iter0_mfcc_train_960_portion0.1
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 32000
token_list:
- '81'
- '5'
- '79'
- '84'
- '27'
- '35'
- '67'
- '56'
- '10'
- '99'
- '24'
- '3'
- '48'
- '8'
- '42'
- '16'
- '32'
- '31'
- '47'
- '43'
- '20'
- '73'
- '49'
- '86'
- '18'
- '64'
- '34'
- '59'
- '95'
- '0'
- '52'
- '44'
- '61'
- '57'
- '30'
- '1'
- '93'
- '6'
- '69'
- '19'
- '7'
- '65'
- '28'
- '89'
- '2'
- '96'
- '91'
- '72'
- '38'
- '78'
- '26'
- '13'
- '39'
- '94'
- '4'
- '88'
- '85'
- '51'
- '82'
- '41'
- '50'
- '21'
- '80'
- '97'
- '87'
- '25'
- '54'
- '12'
- '40'
- '60'
- '29'
- '11'
- '53'
- '71'
- '83'
- '74'
- '68'
- '55'
- '62'
- '76'
- '45'
- '75'
- '92'
- '46'
- '36'
- '66'
- '22'
- '77'
- '23'
- '63'
- '37'
- '58'
- '33'
- '15'
- '17'
- '90'
- '98'
- '14'
- '70'
- '9'
- <unk>
- <sos/eos>
init: null
collate_fn_conf:
label_downsampling: 2
pad: false
rand_crop: true
input_size: 1
num_classes: 100
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
pred_masked_weight: 1.0
pred_nomask_weight: 0.0
loss_weights: 0.0
frontend: null
frontend_conf: {}
specaug: null
specaug_conf: {}
normalize: null
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: torchaudio_hubert
encoder_conf:
encoder_projection_dropout: 0.1
encoder_attention_dropout: 0.1
encoder_ff_interm_dropout: 0.0
encoder_dropout: 0.1
encoder_layer_drop: 0.05
model: torchaudio
model_conf: {}
required:
- output_dir
- token_list
version: '202209'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
WALIDALI/oumadvenly | WALIDALI | 2023-07-06T20:38:46Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T20:33:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### oumadvenly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
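You can also load the weights directly with 🤗 Diffusers; a minimal sketch is below (the instance token is an assumption based on the repo name, so adjust the prompt to the token used during training):
```python
from diffusers import StableDiffusionPipeline
import torch

# Load the Dreambooth weights from this repo (fp16 on a GPU is assumed to be available).
pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/oumadvenly", torch_dtype=torch.float16
).to("cuda")

# "oumadvenly" is assumed to be the instance token used during training.
image = pipe("a photo of oumadvenly", num_inference_steps=50).images[0]
image.save("oumadvenly.png")
```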
Sample pictures of this concept:
|
aroot/eng-guj-simcse_random_ssrb | aroot | 2023-07-06T20:32:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T20:10:52Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2802
- Bleu: 2.8939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cjohlmacher/unit2-taxi-overly-confident | cjohlmacher | 2023-07-06T20:24:09Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T20:24:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-overly-confident
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="cjohlmacher/unit2-taxi-overly-confident", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g085 | jordyvl | 2023-07-06T20:23:38Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T18:14:58Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g085
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g085
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3237
- Accuracy: 0.6425
- Exit 0 Accuracy: 0.1125
- Exit 1 Accuracy: 0.1625
- Exit 2 Accuracy: 0.385
- Exit 3 Accuracy: 0.555
- Exit 4 Accuracy: 0.6075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7611 | 0.1075 | 0.0875 | 0.0675 | 0.105 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7343 | 0.1125 | 0.07 | 0.065 | 0.1275 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6999 | 0.13 | 0.0775 | 0.06 | 0.1375 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6641 | 0.165 | 0.08 | 0.06 | 0.1225 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6228 | 0.1875 | 0.0875 | 0.0575 | 0.115 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5767 | 0.205 | 0.085 | 0.08 | 0.125 | 0.08 | 0.0625 |
| No log | 6.72 | 14 | 2.5435 | 0.2275 | 0.0925 | 0.0825 | 0.155 | 0.1075 | 0.0625 |
| No log | 7.72 | 16 | 2.5258 | 0.24 | 0.1 | 0.095 | 0.1825 | 0.115 | 0.065 |
| No log | 8.72 | 18 | 2.4842 | 0.28 | 0.1 | 0.0975 | 0.2 | 0.1225 | 0.0775 |
| No log | 9.72 | 20 | 2.4249 | 0.285 | 0.1 | 0.1325 | 0.1975 | 0.16 | 0.075 |
| No log | 10.72 | 22 | 2.3847 | 0.3075 | 0.115 | 0.13 | 0.21 | 0.175 | 0.065 |
| No log | 11.72 | 24 | 2.3568 | 0.3075 | 0.115 | 0.1175 | 0.215 | 0.24 | 0.09 |
| No log | 12.72 | 26 | 2.3226 | 0.31 | 0.1125 | 0.1275 | 0.21 | 0.2375 | 0.1075 |
| No log | 13.72 | 28 | 2.2816 | 0.3325 | 0.115 | 0.125 | 0.2225 | 0.23 | 0.1075 |
| No log | 14.72 | 30 | 2.2523 | 0.33 | 0.1125 | 0.1325 | 0.2325 | 0.2575 | 0.11 |
| No log | 15.72 | 32 | 2.2313 | 0.33 | 0.1175 | 0.135 | 0.235 | 0.2825 | 0.1125 |
| No log | 16.72 | 34 | 2.1980 | 0.3425 | 0.1125 | 0.1375 | 0.26 | 0.31 | 0.11 |
| No log | 17.72 | 36 | 2.1571 | 0.345 | 0.1075 | 0.1375 | 0.2625 | 0.32 | 0.1125 |
| No log | 18.72 | 38 | 2.1287 | 0.35 | 0.11 | 0.14 | 0.2925 | 0.3525 | 0.1125 |
| No log | 19.72 | 40 | 2.1027 | 0.36 | 0.11 | 0.14 | 0.315 | 0.3525 | 0.1075 |
| No log | 20.72 | 42 | 2.0783 | 0.3925 | 0.11 | 0.1425 | 0.32 | 0.33 | 0.1325 |
| No log | 21.72 | 44 | 2.0591 | 0.395 | 0.1125 | 0.14 | 0.3075 | 0.2975 | 0.1925 |
| No log | 22.72 | 46 | 2.0173 | 0.4175 | 0.1125 | 0.14 | 0.31 | 0.2975 | 0.2075 |
| No log | 23.72 | 48 | 1.9916 | 0.4275 | 0.1125 | 0.1425 | 0.32 | 0.3225 | 0.2175 |
| No log | 24.72 | 50 | 1.9750 | 0.4375 | 0.1125 | 0.1425 | 0.3225 | 0.3475 | 0.2175 |
| No log | 25.72 | 52 | 1.9224 | 0.465 | 0.1125 | 0.1425 | 0.33 | 0.3825 | 0.2225 |
| No log | 26.72 | 54 | 1.8714 | 0.4725 | 0.1125 | 0.1425 | 0.33 | 0.3875 | 0.23 |
| No log | 27.72 | 56 | 1.8340 | 0.5075 | 0.1125 | 0.1425 | 0.3275 | 0.395 | 0.255 |
| No log | 28.72 | 58 | 1.8065 | 0.52 | 0.115 | 0.1425 | 0.325 | 0.3975 | 0.33 |
| No log | 29.72 | 60 | 1.7904 | 0.5225 | 0.115 | 0.1475 | 0.3275 | 0.41 | 0.375 |
| No log | 30.72 | 62 | 1.7503 | 0.5225 | 0.115 | 0.15 | 0.3275 | 0.435 | 0.435 |
| No log | 31.72 | 64 | 1.7100 | 0.53 | 0.1125 | 0.1475 | 0.34 | 0.4575 | 0.4625 |
| No log | 32.72 | 66 | 1.6820 | 0.5375 | 0.1125 | 0.1475 | 0.3525 | 0.4575 | 0.495 |
| No log | 33.72 | 68 | 1.6534 | 0.5475 | 0.1125 | 0.1475 | 0.3575 | 0.4725 | 0.4975 |
| No log | 34.72 | 70 | 1.6230 | 0.575 | 0.1125 | 0.1475 | 0.3625 | 0.485 | 0.525 |
| No log | 35.72 | 72 | 1.5960 | 0.585 | 0.115 | 0.1475 | 0.3575 | 0.5125 | 0.55 |
| No log | 36.72 | 74 | 1.5728 | 0.575 | 0.11 | 0.1475 | 0.365 | 0.52 | 0.5525 |
| No log | 37.72 | 76 | 1.5469 | 0.585 | 0.11 | 0.1475 | 0.37 | 0.525 | 0.5525 |
| No log | 38.72 | 78 | 1.5219 | 0.5975 | 0.11 | 0.15 | 0.37 | 0.5275 | 0.56 |
| No log | 39.72 | 80 | 1.5011 | 0.61 | 0.1125 | 0.15 | 0.3725 | 0.5375 | 0.565 |
| No log | 40.72 | 82 | 1.4998 | 0.61 | 0.1125 | 0.155 | 0.37 | 0.53 | 0.57 |
| No log | 41.72 | 84 | 1.4762 | 0.6175 | 0.1125 | 0.155 | 0.375 | 0.53 | 0.575 |
| No log | 42.72 | 86 | 1.4490 | 0.6325 | 0.1125 | 0.1575 | 0.38 | 0.5375 | 0.585 |
| No log | 43.72 | 88 | 1.4278 | 0.62 | 0.1175 | 0.1575 | 0.375 | 0.54 | 0.59 |
| No log | 44.72 | 90 | 1.4169 | 0.63 | 0.115 | 0.1575 | 0.3775 | 0.54 | 0.585 |
| No log | 45.72 | 92 | 1.4143 | 0.6225 | 0.115 | 0.1575 | 0.385 | 0.535 | 0.5925 |
| No log | 46.72 | 94 | 1.3997 | 0.615 | 0.1175 | 0.1575 | 0.385 | 0.5425 | 0.6 |
| No log | 47.72 | 96 | 1.3838 | 0.6225 | 0.1175 | 0.1575 | 0.385 | 0.54 | 0.6025 |
| No log | 48.72 | 98 | 1.3737 | 0.625 | 0.115 | 0.16 | 0.39 | 0.545 | 0.5975 |
| No log | 49.72 | 100 | 1.3668 | 0.6375 | 0.1125 | 0.16 | 0.3925 | 0.55 | 0.6025 |
| No log | 50.72 | 102 | 1.3623 | 0.6375 | 0.1125 | 0.16 | 0.39 | 0.5525 | 0.6075 |
| No log | 51.72 | 104 | 1.3525 | 0.635 | 0.1125 | 0.16 | 0.39 | 0.5525 | 0.6125 |
| No log | 52.72 | 106 | 1.3455 | 0.6375 | 0.1125 | 0.16 | 0.395 | 0.555 | 0.61 |
| No log | 53.72 | 108 | 1.3390 | 0.64 | 0.1125 | 0.16 | 0.3925 | 0.5525 | 0.6075 |
| No log | 54.72 | 110 | 1.3338 | 0.645 | 0.11 | 0.16 | 0.3875 | 0.5525 | 0.605 |
| No log | 55.72 | 112 | 1.3298 | 0.645 | 0.1125 | 0.16 | 0.385 | 0.5525 | 0.61 |
| No log | 56.72 | 114 | 1.3262 | 0.645 | 0.1125 | 0.16 | 0.385 | 0.5525 | 0.6075 |
| No log | 57.72 | 116 | 1.3241 | 0.6425 | 0.115 | 0.16 | 0.385 | 0.5525 | 0.6075 |
| No log | 58.72 | 118 | 1.3237 | 0.6425 | 0.1125 | 0.1625 | 0.385 | 0.555 | 0.6075 |
| No log | 59.72 | 120 | 1.3237 | 0.6425 | 0.1125 | 0.1625 | 0.385 | 0.555 | 0.6075 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
whywynn/ppo-Huggy | whywynn | 2023-07-06T20:22:34Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-06T20:22:25Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: whywynn/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
UGARIT/flair_grc_multi_ner | UGARIT | 2023-07-06T20:21:08Z | 7 | 1 | flair | [
"flair",
"pytorch",
"token-classification",
"ner",
"grc",
"region:us"
] | token-classification | 2022-11-05T08:42:03Z | ---
language:
- grc
tags:
- flair
- token-classification
- ner
widget:
- ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .
---
# Named Entity Recognition for Ancient Greek
Pretrained NER tagging model for ancient Greek
# Scores & Tagset
<details>
### Training
| | Precision | Recall | F1-score | Support |
|------|:---------:|:--------:|:--------:|:-------:|
| PER | 93.39% | 96.33% | 94.84% | 2127 |
| MISC | 84.69% | 92.50% | 88.42% | 933 |
| LOC | 89.55% | 77.32% | 82.99% | 388 |
### Evaluation
| | Precision | Recall | F1-score | Support |
|------|:---------:|:--------:|:--------:|:-------:|
| PER | 90.48% | 91.94% | 91.20% | 124 |
| MISC | 89.29% | 94.34% | 91.74% | 159 |
| LOC | 82.69% | 65.15% | 72.88% | 66 |
</details>
# Usage
```python
from flair.data import Sentence
from flair.models import SequenceTagger
tagger = SequenceTagger.load("UGARIT/flair_grc_multi_ner")
sentence = Sentence('ταῦτα εἴπας ὁ Ἀλέξανδρος παρίζει Πέρσῃ ἀνδρὶ ἄνδρα Μακεδόνα ὡς γυναῖκα τῷ λόγῳ · οἳ δέ , ἐπείτε σφέων οἱ Πέρσαι ψαύειν ἐπειρῶντο , διεργάζοντο αὐτούς .')
tagger.predict(sentence)
for entity in sentence.get_spans('ner'):
print(entity)
```
# Citation
*If you use this model, please consider citing [this work](https://www.researchgate.net/publication/365131651_Transformer-Based_Named_Entity_Recognition_for_Ancient_Greek):*
```latex
@unpublished{yousefetal22,
 author = "Yousef, Tariq and Palladino, Chiara and Jänicke, Stefan",
 title = "Transformer-Based Named Entity Recognition for Ancient Greek",
 year = {2022},
 month = {11},
 doi = "10.13140/RG.2.2.34846.61761",
 url = {https://www.researchgate.net/publication/365131651_Transformer-Based_Named_Entity_Recognition_for_Ancient_Greek}
}
```
|
YakovElm/Apache_15_BERT_More_Properties | YakovElm | 2023-07-06T20:11:25Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T20:10:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_15_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_15_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1866
- Train Accuracy: 0.9542
- Validation Loss: 0.3421
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1956 | 0.9535 | 0.3893 | 0.8924 | 0 |
| 0.1874 | 0.9542 | 0.3804 | 0.8924 | 1 |
| 0.1866 | 0.9542 | 0.3421 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
igoroliveira/distilbert-base-uncased-finetuned-cola | igoroliveira | 2023-07-06T20:09:07Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T19:11:37Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: igoroliveira/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# igoroliveira/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1959
- Validation Loss: 0.5357
- Train Matthews Correlation: 0.5177
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5247 | 0.4570 | 0.4887 | 0 |
| 0.3259 | 0.4597 | 0.5101 | 1 |
| 0.1959 | 0.5357 | 0.5177 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-mya-simcse_random_usblu | aroot | 2023-07-06T20:03:33Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T19:41:57Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_random_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_random_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8860
- Bleu: 4.2989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_random_ssrb | aroot | 2023-07-06T19:56:06Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T19:37:28Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random_ssrb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random_ssrb
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1502
- Bleu: 31.6328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ericNguyen0132/roberta-large-Dep-first | ericNguyen0132 | 2023-07-06T19:55:21Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-04T11:41:34Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep-first
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep-first
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1795
- Accuracy: 0.702
- F1: 0.5706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5563 | 1.0 | 751 | 0.5324 | 0.756 | 0.6188 |
| 0.4721 | 2.0 | 1502 | 0.6204 | 0.691 | 0.5874 |
| 0.3836 | 3.0 | 2253 | 0.7990 | 0.696 | 0.525 |
| 0.3245 | 4.0 | 3004 | 0.9714 | 0.694 | 0.5726 |
| 0.2795 | 5.0 | 3755 | 1.1795 | 0.702 | 0.5706 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nlphug/distilbert-base-uncased-finetuned-squad | nlphug | 2023-07-06T19:54:38Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-06T10:10:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aroot/eng-fra-simcse_central_usblu | aroot | 2023-07-06T19:53:25Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T19:34:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central_usblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central_usblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1457
- Bleu: 32.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nkpz/Lawyer-Vicuna-200-gptq-32g | nkpz | 2023-07-06T19:53:24Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T19:35:31Z | ---
license: other
---
4-bit (32 groupsize) quantized files for [Devden/Lawyer-Vicuna-200](https://huggingface.co/Devden/Lawyer-Vicuna-200)
Quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
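A minimal loading sketch with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) is shown below. It assumes a `quantize_config.json` is present in the repo (otherwise pass a `BaseQuantizeConfig` matching the settings used for quantization), and the prompt is only illustrative:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "nkpz/Lawyer-Vicuna-200-gptq-32g"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

# If the quantized weight filename is not auto-detected, pass model_basename explicitly.
model = AutoGPTQForCausalLM.from_quantized(repo, use_safetensors=True, device="cuda:0")

prompt = "What is the difference between a tort and a crime?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```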
Command used to quantize: `python llama.py /my/model/directory c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors /my/output/file.safetensors` |
earentilt/ppo-LunarLander-v2 | earentilt | 2023-07-06T19:49:29Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T19:49:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.47 +/- 42.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is an assumption.
checkpoint = load_from_hub(
    repo_id="earentilt/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
wesleyacheng/angry-birds-classifier | wesleyacheng | 2023-07-06T19:34:39Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:tweet_eval",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-30T06:11:14Z | ---
license: apache-2.0
datasets:
- tweet_eval
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
widget:
- text: I hate you
example_title: Angry Example
- text: I love you
example_title: Not Angry Example
---
First posted in my [Kaggle](https://www.kaggle.com/code/wesleyacheng/angry-birds-classifier).
I love the **Angry Birds** game! I used to play it day and night nonstop!
I made an 😡🐦 **ANGRY BIRDS Classifier** to classify **ANGRY Tweets**!
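To try it in code, here is a minimal sketch with the 🤗 Transformers pipeline (the exact label names come from the model config, so check the returned keys rather than the comments):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="wesleyacheng/angry-birds-classifier")

print(classifier("I hate you"))  # expected to score high on the angry label
print(classifier("I love you"))  # expected to score high on the not-angry label
```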
Here, I used the [Twitter Emotion Dataset](https://huggingface.co/datasets/tweet_eval) and [BERT](https://huggingface.co/docs/transformers/model_doc/bert) using [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning) in native [PyTorch](https://en.wikipedia.org/wiki/PyTorch). |
GianniCatBug/fake-news-bert-base-spanish-wwm-cased | GianniCatBug | 2023-07-06T19:32:24Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T19:21:25Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fake-news-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-news-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5154
- F1: 0.8957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5098 | 1.0 | 140 | 0.4978 | 0.7340 |
| 0.2473 | 2.0 | 280 | 0.3888 | 0.8829 |
| 0.0908 | 3.0 | 420 | 0.4420 | 0.8969 |
| 0.0332 | 4.0 | 560 | 0.5604 | 0.8796 |
| 0.0052 | 5.0 | 700 | 0.5154 | 0.8957 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
andressrg/textual_inversion_meal | andressrg | 2023-07-06T19:28:50Z | 16 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T18:45:40Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
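A minimal usage sketch with 🤗 Diffusers (the placeholder token is an assumption; check the repo files, e.g. `learned_embeds.bin`, for the actual token):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo.
pipe.load_textual_inversion("andressrg/textual_inversion_meal")

# "<meal>" is an assumed placeholder token -- replace it with the token stored in the repo.
image = pipe("a photo of <meal> on a wooden table", num_inference_steps=50).images[0]
image.save("meal.png")
```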
|
RogerB/afriberta_small-finetuned-kintweetsB | RogerB | 2023-07-06T19:25:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T19:16:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: afriberta_small-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_small-finetuned-kintweetsB
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1849
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6414 | 1.0 | 900 | 3.3043 |
| 3.3705 | 2.0 | 1800 | 3.1980 |
| 3.3101 | 3.0 | 2700 | 3.1867 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
carolanderson/roberta-base-food-ner | carolanderson | 2023-07-06T19:24:37Z | 259 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-07-06T04:46:47Z | ---
license: mit
language:
- en
library_name: transformers
---
# Model Card for carolanderson/roberta-base-food-ner
## Model Details
### Model Description
Model for tagging mentions of food in the text of recipes. Trained by fine-tuning RoBERTa base on a set of about 300 hand-labeled recipes derived from [this dataset from Kaggle](https://www.kaggle.com/hugodarwood/epirecipes). Achieves an F1 score of 0.96 on the custom validation set.
- **Developed by:** Carol Anderson
- **Shared by:** Carol Anderson
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [roberta-base](https://huggingface.co/roberta-base)
### Model Sources
- **Repository:** [carolmanderson/food](https://github.com/carolmanderson/food/tree/master)
- **Demo:** [food-ner](https://huggingface.co/spaces/carolanderson/food-ner)
## How to Get Started with the Model
Use the code below to get started with the model.
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
model = AutoModelForTokenClassification.from_pretrained('carolanderson/roberta-base-food-ner')
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Saute the onions in olive oil until browned."
results = nlp(example, aggregation_strategy="first")
``` |
hopkins/eng-mya-common.simcse.roberta-large | hopkins | 2023-07-06T19:17:19Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T18:56:39Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-common.simcse.roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-common.simcse.roberta-large
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8470
- Bleu: 4.8759
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bssr138/distilhubert-finetuned-gtzan | bssr138 | 2023-07-06T19:16:25Z | 160 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-06T19:12:26Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.82
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5911
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9683 | 1.0 | 113 | 1.7941 | 0.47 |
| 1.2944 | 2.0 | 226 | 1.1781 | 0.65 |
| 1.031 | 3.0 | 339 | 0.9032 | 0.77 |
| 0.8101 | 4.0 | 452 | 0.7538 | 0.78 |
| 0.6646 | 5.0 | 565 | 0.6414 | 0.83 |
| 0.4015 | 6.0 | 678 | 0.6811 | 0.8 |
| 0.5056 | 7.0 | 791 | 0.5735 | 0.85 |
| 0.172 | 8.0 | 904 | 0.5621 | 0.83 |
| 0.3555 | 9.0 | 1017 | 0.5750 | 0.83 |
| 0.1488 | 10.0 | 1130 | 0.5911 | 0.82 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jinouga/sagiri-yamada-asaemon-v1 | Jinouga | 2023-07-06T19:13:58Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-05T18:27:28Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### sagiri-yamada-asaemon-v1 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
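You can also load the concept with 🤗 Diffusers; a minimal sketch (the prompt wording is an assumption based on the repo name):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Jinouga/sagiri-yamada-asaemon-v1", torch_dtype=torch.float16
).to("cuda")

# The concept name below is assumed from the repo name; adjust to the training prompt if needed.
image = pipe("a portrait of sagiri yamada asaemon", num_inference_steps=50).images[0]
image.save("sagiri.png")
```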
Sample pictures of this concept:
|
KevinQuijano/model-dreambooth-chair-2.0 | KevinQuijano | 2023-07-06T19:07:15Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T18:13:41Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a sennagamer chair
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - KevinQuijano/model-dreambooth-chair-2.0
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a sennagamer chair using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
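As a quick sketch of inference with 🤗 Diffusers (the prompt reuses the instance prompt from the metadata above; fp16 on a GPU is assumed):
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "KevinQuijano/model-dreambooth-chair-2.0", torch_dtype=torch.float16
).to("cuda")

# Reuse the instance prompt the model was trained with: "a sennagamer chair".
image = pipe("a sennagamer chair in a bright studio", num_inference_steps=50).images[0]
image.save("sennagamer_chair.png")
```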
|
BigBri/2_my_awesome_eli5_clm-model | BigBri | 2023-07-06T19:06:15Z | 130 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T18:34:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 2_my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2_my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7367
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8718 | 1.0 | 1133 | 3.7563 |
| 3.7741 | 2.0 | 2266 | 3.7410 |
| 3.7327 | 3.0 | 3399 | 3.7367 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
YakovElm/Apache_10_BERT_More_Properties | YakovElm | 2023-07-06T19:02:27Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T19:01:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_10_BERT_More_Properties
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_10_BERT_More_Properties
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2335
- Train Accuracy: 0.9383
- Validation Loss: 0.4310
- Validation Accuracy: 0.8644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2462 | 0.9333 | 0.4480 | 0.8644 | 0 |
| 0.2321 | 0.9383 | 0.4521 | 0.8644 | 1 |
| 0.2335 | 0.9383 | 0.4310 | 0.8644 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.3 | PraveenJesu | 2023-07-06T18:53:33Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T18:53:32Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
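To run inference with the resulting LoRA adapter, a minimal sketch is below; the base checkpoint (`openai/whisper-medium`) is assumed from the repo name:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the base Whisper model, then attach the adapter weights from this repo.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-zrx-peft-lora-v2.2.3")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
# model.generate(...) can then be used as with the base Whisper model.
```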
### Framework versions
- PEFT 0.4.0.dev0
|
hopkins/eng-kor-common.simcse.roberta-large | hopkins | 2023-07-06T18:50:41Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T18:33:09Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-common.simcse.roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-common.simcse.roberta-large
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9976
- Bleu: 7.2965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EllaHong/datamap_kullm-polyglot-5.8b_exp2 | EllaHong | 2023-07-06T18:49:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T18:49:37Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
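Expressed as a `transformers` `BitsAndBytesConfig`, the 4-bit NF4 settings above would look roughly like this (a minimal sketch under the assumption of a standard QLoRA-style setup, not the author's exact script):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 config with double quantization, mirroring the values above (illustrative only).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```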
### Framework versions
- PEFT 0.4.0.dev0
|
RogerB/afro-xlmr-small-finetuned-kintweetsB | RogerB | 2023-07-06T18:48:30Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T18:33:31Z | ---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-small-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-small-finetuned-kintweetsB
This model is a fine-tuned version of [Davlan/afro-xlmr-small](https://huggingface.co/Davlan/afro-xlmr-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.808 | 1.0 | 900 | 1.6132 |
| 1.7073 | 2.0 | 1800 | 1.5754 |
| 1.6585 | 3.0 | 2700 | 1.5900 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ibibek/vicuna-13B-gptq-new | ibibek | 2023-07-06T18:48:21Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T18:41:05Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML | TheBloke | 2023-07-06T18:39:46Z | 0 | 23 | null | [
"license:other",
"region:us"
] | null | 2023-07-06T18:34:16Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's Wizard Vicuna 7B Uncensored GGML
These files are GGML format model files for [Eric Hartford's Wizard Vicuna 7B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 wizard-vicuna-7b-uncensored-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
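As a minimal sketch, applying the patch described above could look like this (the file and function names come from the paragraph above; the model repo used for loading is the fp16 repo linked in this card, and the loading code itself is an assumed example):
```python
# Apply the RoPE scaling monkey-patch before any LLaMA model is created (sketch).
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()

# Only after patching, load the model as usual (illustrative example).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```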
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's Wizard Vicuna 7B Uncensored
This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
|
hopkins/eng-guj-common.simcse.roberta-large | hopkins | 2023-07-06T18:37:12Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T18:16:00Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-common.simcse.roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-common.simcse.roberta-large
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2500
- Bleu: 3.2434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RogerB/afro-xlmr-mini-finetuned-kintweetsB | RogerB | 2023-07-06T18:32:09Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T18:16:39Z | ---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-mini-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-mini-finetuned-kintweetsB
This model is a fine-tuned version of [Davlan/afro-xlmr-mini](https://huggingface.co/Davlan/afro-xlmr-mini) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1709 | 1.0 | 900 | 2.8941 |
| 3.0063 | 2.0 | 1800 | 2.8361 |
| 2.9479 | 3.0 | 2700 | 2.7810 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
BrainTheos/whisper-tiny-en-us | BrainTheos | 2023-07-06T18:27:29Z | 76 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-06T17:32:37Z | ---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-us
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.36046511627906974
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-us
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7173
- Wer Ortho: 36.1373
- Wer: 0.3605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.0007 | 17.86 | 500 | 0.7173 | 36.1373 | 0.3605 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GGML | TheBloke | 2023-07-06T18:25:32Z | 0 | 1 | null | [
"arxiv:1910.09700",
"license:other",
"region:us"
] | null | 2023-07-06T18:19:54Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kevin Pro's Vicuna 7B CoT GGML
These files are GGML format model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kevinpro/Vicuna-7B-CoT)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-7b-cot-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-7b-cot-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-7b-cot-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-7b-cot-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| vicuna-7b-cot-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| vicuna-7b-cot-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| vicuna-7b-cot-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| vicuna-7b-cot-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| vicuna-7b-cot-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 vicuna-7b-cot-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Kevin Pro's Vicuna 7B CoT
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kevin Pro's Vicuna 7B CoT fp16
These files are pytorch format fp16 model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kevin Pro's Vicuna 7B CoT
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna
If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section.
Another 13B version: https://huggingface.co/kevinpro/Vicuna-13B-CoT
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aroot/eng-fra-simcse_central_ssblu | aroot | 2023-07-06T18:24:41Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T18:06:23Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1498
- Bleu: 31.6893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g040 | jordyvl | 2023-07-06T18:14:21Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-06T16:05:16Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g040
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-06_g040
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1088
- Accuracy: 0.715
- Exit 0 Accuracy: 0.1175
- Exit 1 Accuracy: 0.1575
- Exit 2 Accuracy: 0.3075
- Exit 3 Accuracy: 0.32
- Exit 4 Accuracy: 0.0675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7601 | 0.11 | 0.0825 | 0.0675 | 0.0875 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7329 | 0.115 | 0.07 | 0.065 | 0.115 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6953 | 0.135 | 0.075 | 0.06 | 0.12 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6612 | 0.165 | 0.08 | 0.055 | 0.1225 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6176 | 0.1925 | 0.0875 | 0.0575 | 0.1175 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5681 | 0.2125 | 0.09 | 0.08 | 0.1225 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5380 | 0.2125 | 0.095 | 0.08 | 0.125 | 0.0625 | 0.0625 |
| No log | 7.72 | 16 | 2.5137 | 0.2275 | 0.095 | 0.09 | 0.125 | 0.0625 | 0.0625 |
| No log | 8.72 | 18 | 2.4662 | 0.2775 | 0.095 | 0.0975 | 0.125 | 0.0625 | 0.0625 |
| No log | 9.72 | 20 | 2.4192 | 0.3 | 0.0925 | 0.105 | 0.1275 | 0.0625 | 0.0625 |
| No log | 10.72 | 22 | 2.3755 | 0.3075 | 0.095 | 0.1225 | 0.135 | 0.0625 | 0.0625 |
| No log | 11.72 | 24 | 2.3290 | 0.3225 | 0.0975 | 0.1175 | 0.125 | 0.0625 | 0.0625 |
| No log | 12.72 | 26 | 2.2739 | 0.3375 | 0.1 | 0.115 | 0.125 | 0.0625 | 0.0625 |
| No log | 13.72 | 28 | 2.2219 | 0.3525 | 0.0975 | 0.125 | 0.13 | 0.065 | 0.0625 |
| No log | 14.72 | 30 | 2.1835 | 0.3525 | 0.1 | 0.125 | 0.1475 | 0.065 | 0.0625 |
| No log | 15.72 | 32 | 2.1610 | 0.3725 | 0.1025 | 0.1275 | 0.155 | 0.0675 | 0.0625 |
| No log | 16.72 | 34 | 2.1139 | 0.39 | 0.1025 | 0.135 | 0.1675 | 0.07 | 0.0625 |
| No log | 17.72 | 36 | 2.0748 | 0.405 | 0.1 | 0.1375 | 0.185 | 0.0725 | 0.0625 |
| No log | 18.72 | 38 | 2.0145 | 0.4225 | 0.1025 | 0.14 | 0.1875 | 0.0725 | 0.0625 |
| No log | 19.72 | 40 | 1.9595 | 0.4475 | 0.1025 | 0.145 | 0.185 | 0.0725 | 0.0625 |
| No log | 20.72 | 42 | 1.9077 | 0.4875 | 0.1025 | 0.1425 | 0.18 | 0.085 | 0.0625 |
| No log | 21.72 | 44 | 1.8328 | 0.52 | 0.1025 | 0.145 | 0.185 | 0.11 | 0.0625 |
| No log | 22.72 | 46 | 1.7703 | 0.555 | 0.105 | 0.1425 | 0.185 | 0.1125 | 0.0625 |
| No log | 23.72 | 48 | 1.7462 | 0.565 | 0.11 | 0.1425 | 0.2025 | 0.11 | 0.0625 |
| No log | 24.72 | 50 | 1.6894 | 0.5625 | 0.1125 | 0.14 | 0.205 | 0.12 | 0.0625 |
| No log | 25.72 | 52 | 1.6273 | 0.585 | 0.1125 | 0.1475 | 0.205 | 0.1225 | 0.0625 |
| No log | 26.72 | 54 | 1.5894 | 0.5875 | 0.115 | 0.1475 | 0.21 | 0.1325 | 0.0625 |
| No log | 27.72 | 56 | 1.5567 | 0.605 | 0.115 | 0.1475 | 0.21 | 0.13 | 0.0625 |
| No log | 28.72 | 58 | 1.5013 | 0.6225 | 0.115 | 0.1475 | 0.215 | 0.135 | 0.0625 |
| No log | 29.72 | 60 | 1.4588 | 0.64 | 0.115 | 0.15 | 0.2175 | 0.145 | 0.0625 |
| No log | 30.72 | 62 | 1.4424 | 0.6425 | 0.115 | 0.15 | 0.23 | 0.145 | 0.065 |
| No log | 31.72 | 64 | 1.4074 | 0.65 | 0.115 | 0.1475 | 0.245 | 0.1475 | 0.065 |
| No log | 32.72 | 66 | 1.3663 | 0.6675 | 0.115 | 0.1475 | 0.2475 | 0.17 | 0.065 |
| No log | 33.72 | 68 | 1.3465 | 0.67 | 0.1175 | 0.1475 | 0.26 | 0.17 | 0.065 |
| No log | 34.72 | 70 | 1.3363 | 0.6675 | 0.115 | 0.15 | 0.265 | 0.18 | 0.065 |
| No log | 35.72 | 72 | 1.3183 | 0.67 | 0.1175 | 0.15 | 0.2725 | 0.185 | 0.0625 |
| No log | 36.72 | 74 | 1.2789 | 0.7025 | 0.1175 | 0.1525 | 0.2725 | 0.195 | 0.0625 |
| No log | 37.72 | 76 | 1.2625 | 0.7025 | 0.12 | 0.1525 | 0.2725 | 0.22 | 0.065 |
| No log | 38.72 | 78 | 1.2645 | 0.6875 | 0.12 | 0.1525 | 0.2725 | 0.2325 | 0.065 |
| No log | 39.72 | 80 | 1.2384 | 0.695 | 0.1225 | 0.1525 | 0.275 | 0.24 | 0.065 |
| No log | 40.72 | 82 | 1.2138 | 0.7075 | 0.1225 | 0.1525 | 0.29 | 0.2475 | 0.065 |
| No log | 41.72 | 84 | 1.2041 | 0.6975 | 0.12 | 0.1525 | 0.29 | 0.2475 | 0.065 |
| No log | 42.72 | 86 | 1.1907 | 0.7075 | 0.1175 | 0.1525 | 0.29 | 0.2575 | 0.0625 |
| No log | 43.72 | 88 | 1.1784 | 0.7075 | 0.1175 | 0.1525 | 0.2925 | 0.2675 | 0.0625 |
| No log | 44.72 | 90 | 1.1678 | 0.715 | 0.1175 | 0.1525 | 0.2925 | 0.2875 | 0.0625 |
| No log | 45.72 | 92 | 1.1662 | 0.715 | 0.1175 | 0.155 | 0.295 | 0.285 | 0.0625 |
| No log | 46.72 | 94 | 1.1568 | 0.715 | 0.1175 | 0.155 | 0.295 | 0.2925 | 0.0625 |
| No log | 47.72 | 96 | 1.1497 | 0.715 | 0.1175 | 0.155 | 0.3 | 0.3 | 0.0625 |
| No log | 48.72 | 98 | 1.1456 | 0.715 | 0.1175 | 0.1575 | 0.3 | 0.3025 | 0.065 |
| No log | 49.72 | 100 | 1.1406 | 0.7125 | 0.1175 | 0.1575 | 0.2975 | 0.305 | 0.0675 |
| No log | 50.72 | 102 | 1.1333 | 0.72 | 0.1175 | 0.1575 | 0.2975 | 0.305 | 0.0675 |
| No log | 51.72 | 104 | 1.1242 | 0.7175 | 0.1175 | 0.1575 | 0.2975 | 0.3125 | 0.0675 |
| No log | 52.72 | 106 | 1.1197 | 0.7125 | 0.1175 | 0.1575 | 0.2975 | 0.3125 | 0.0675 |
| No log | 53.72 | 108 | 1.1161 | 0.715 | 0.1175 | 0.1575 | 0.3 | 0.3125 | 0.0675 |
| No log | 54.72 | 110 | 1.1114 | 0.715 | 0.1175 | 0.1575 | 0.3075 | 0.3125 | 0.0675 |
| No log | 55.72 | 112 | 1.1096 | 0.715 | 0.1175 | 0.1575 | 0.315 | 0.32 | 0.0675 |
| No log | 56.72 | 114 | 1.1084 | 0.715 | 0.1175 | 0.1575 | 0.3125 | 0.32 | 0.0675 |
| No log | 57.72 | 116 | 1.1085 | 0.715 | 0.1175 | 0.1575 | 0.3075 | 0.32 | 0.0675 |
| No log | 58.72 | 118 | 1.1089 | 0.7125 | 0.1175 | 0.1575 | 0.3075 | 0.32 | 0.0675 |
| No log | 59.72 | 120 | 1.1088 | 0.715 | 0.1175 | 0.1575 | 0.3075 | 0.32 | 0.0675 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GGML | TheBloke | 2023-07-06T18:12:35Z | 0 | 4 | null | [
"arxiv:2302.13971",
"arxiv:2306.05685",
"license:other",
"region:us"
] | null | 2023-07-06T18:06:25Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# LmSys' Vicuna 7B v1.3 GGML
These files are GGML format model files for [LmSys' Vicuna 7B v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-7b-v1.3)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| vicuna-7b-v1.3-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 vicuna-7b-v1.3-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
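For readers who prefer to see those hyperparameters in code form, here is a rough PEFT/`transformers` equivalent. This is a sketch only, not the author's actual training script: the output directory is a placeholder and dataset loading is omitted.

```python
# Rough PEFT equivalent of the LoRA hyperparameters listed above (illustrative only).
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=4,                      # Rank = 4
    lora_alpha=8,             # Alpha = 8
    lora_dropout=0.0,         # no dropout
    bias="none",              # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="superhot-lora",   # placeholder
    learning_rate=3e-4,
    num_train_epochs=3,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-5,
)
```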
# Original model card: LmSys' Vicuna 7B v1.3
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
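As a rough illustration (not part of the original card), the v1.3 weights can also be loaded directly with `transformers`. The repo id `lmsys/vicuna-7b-v1.3` and the abbreviated prompt format below are assumptions; see the FastChat docs linked above for the official usage and full conversation template.

```python
# Minimal sketch of loading Vicuna-7B v1.3 with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.3"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Approximate v1.3 prompt format; FastChat documents the exact template.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```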
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
Panchovix/GPlatty-30B-PI-8192-LoRA-4bit-32g | Panchovix | 2023-07-06T18:11:21Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-04T02:52:33Z | ---
license: other
---
[GPlatty-30B](https://huggingface.co/lilloukas/GPlatty-30B) merged with bhenrym14's [airoboros-33b-gpt4-1.4.1-PI-8192-LoRA](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-LoRA), quantized at 4 bit.
More info about the LoRA is available [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16). This is an alternative to the SuperHOT 8k LoRA, trained with LoRA rank 64 on the airoboros 1.4.1 dataset.
It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192.
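As an illustrative sketch only, loading the model through the exllama repo's Python API might look roughly like this. The module names are repo-local imports from the exllama example scripts, the file paths are placeholders, and the exact API may have changed; the commented line corresponds to the multi-GPU split mentioned below.

```python
# Rough sketch based on exllama's example scripts (not an official recipe).
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "models/GPlatty-30B-PI-8192-LoRA-4bit-32g"      # placeholder path
config = ExLlamaConfig(f"{model_dir}/config.json")
config.model_path = f"{model_dir}/model.safetensors"        # placeholder filename
config.max_seq_len = 8192
config.compress_pos_emb = 4.0     # matches the recommendation above
# config.set_auto_map("9,21")     # 2x24GB split suggested below

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(f"{model_dir}/tokenizer.model")
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)
print(generator.generate_simple("Hello, my name is", max_new_tokens=32))
```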
If you have 2x24GB VRAM GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 |
aroot/eng-fra-simcse_random_ssblu | aroot | 2023-07-06T18:11:02Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:52:40Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random_ssblu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random_ssblu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1512
- Bleu: 31.7456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
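As a rough usage sketch (not part of the auto-generated card), the fine-tuned checkpoint can be used like any mBART-50 model. The language codes are assumptions based on the English-French direction in the model name, and if the tokenizer files are not present in this repo they can be loaded from the base model instead.

```python
# Illustrative inference sketch for the fine-tuned mBART-50 checkpoint above.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/eng-fra-simcse_random_ssblu"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```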
|
Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k-4bit-32g | Panchovix | 2023-07-06T18:09:53Z | 4 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-26T04:45:45Z | ---
license: other
---
[WizardLM-33B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192.
If you have 2x24GB VRAM GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 |
Panchovix/WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g | Panchovix | 2023-07-06T18:09:47Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-26T22:50:20Z | ---
license: other
---
[WizardLM-Uncensored-SuperCOT-StoryTelling-30b](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
It was created with GPTQ-for-LLaMA using group size 32 and act-order true, to keep perplexity as close as possible to the FP16 model.
I HIGHLY suggest using exllama to avoid VRAM issues.
Use compress_pos_emb = 4 for any context length up to 8192.
If you have 2x24GB VRAM GPUs, use the following to avoid out-of-memory errors at 8192 context:
gpu_split: 9,21 |
KevinQuijano/model-dreambooth-chair-2 | KevinQuijano | 2023-07-06T18:04:33Z | 29 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T17:37:58Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a sennagamer chair
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - KevinQuijano/model-dreambooth-chair-2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a sennagamer chair using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
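A minimal inference sketch (not part of the original card), assuming the standard diffusers DreamBooth workflow and the instance prompt from the metadata above:

```python
# Illustrative sketch: generate an image with the DreamBooth checkpoint above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "KevinQuijano/model-dreambooth-chair-2", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a sennagamer chair in a modern living room", num_inference_steps=50
).images[0]
image.save("sennagamer_chair.png")
```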
|
RogerB/afro-xlmr-large-finetuned-kintweetsB | RogerB | 2023-07-06T18:03:53Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-06T17:18:14Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-finetuned-kintweetsB
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1027 | 1.0 | 3000 | 1.9350 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
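As a rough usage sketch (not part of the auto-generated card), the checkpoint can be queried with the standard fill-mask pipeline; the example sentence is only a rough Kinyarwanda placeholder.

```python
# Illustrative fill-mask sketch for the fine-tuned checkpoint above.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask", model="RogerB/afro-xlmr-large-finetuned-kintweetsB"
)
# "<mask>" is the XLM-R mask token; the sentence is a rough placeholder.
print(fill_mask("Kigali ni umurwa mukuru wa <mask>."))
```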
|
hopkins/eng-fra-common.simcse.roberta-large | hopkins | 2023-07-06T18:02:47Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:44:05Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-common.simcse.roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-common.simcse.roberta-large
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1339
- Bleu: 33.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-deu-common.simcse.roberta-large | hopkins | 2023-07-06T17:51:37Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:37:45Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-common.simcse.roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-common.simcse.roberta-large
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6605
- Bleu: 21.3413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-mya-common | hopkins | 2023-07-06T17:48:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:28:32Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-common
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-common
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8424
- Bleu: 4.9087
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mucktiymuck/treacefalcon-instruct | mucktiymuck | 2023-07-06T17:46:53Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"coreml",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T17:44:21Z | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"
example_title: "Abu Dhabi Trip"
- text: "What's the Everett interpretation of quantum mechanics?"
example_title: "Q/A: Quantum & Answers"
- text: "Give me a list of the top 10 dive sites you would recommend around the world."
example_title: "Diving Top 10"
- text: "Can you tell me more about deep-water soloing?"
example_title: "Extreme sports"
- text: "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?"
example_title: "Twitter Helper"
- text: "What are the responsabilities of a Chief Llama Officer?"
example_title: "Trendy Jobs"
license: apache-2.0
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B-Instruct to develop guardrails and to take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Baize](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
[email protected] |
makaveli10/Reinforce-PixelCopter | makaveli10 | 2023-07-06T17:39:40Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-06T17:39:36Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.90 +/- 36.63
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aroot/eng-mya-simcse_central_usbbu | aroot | 2023-07-06T17:33:28Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:12:31Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-simcse_central_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-simcse_central_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9021
- Bleu: 3.9804
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-kor-common | hopkins | 2023-07-06T17:33:00Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T17:15:18Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-common
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-common
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9972
- Bleu: 7.4980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zwich07/3dmm | zwich07 | 2023-07-06T17:26:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-06T17:25:03Z | ---
license: creativeml-openrail-m
---
|
i617/falcon-n-adapt1 | i617 | 2023-07-06T17:22:32Z | 6 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-06-21T07:41:15Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
hopkins/eng-guj-common | hopkins | 2023-07-06T17:09:07Z | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T16:47:42Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-common
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-common
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2333
- Bleu: 2.9427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Seungjun/GSOCt5-small-finetuned-t5_V2 | Seungjun | 2023-07-06T17:02:16Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-02T07:42:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: GSOCt5-small-finetuned-t5_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSOCt5-small-finetuned-t5_V2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6585
- Rouge1: 35.6178
- Rouge2: 26.3831
- Rougel: 33.0833
- Rougelsum: 34.2411
- Gen Len: 18.9393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.8214 | 1.0 | 631 | 0.6848 | 35.363 | 26.0631 | 32.7693 | 33.9591 | 18.9571 |
| 0.6835 | 2.0 | 1262 | 0.6585 | 35.6178 | 26.3831 | 33.0833 | 34.2411 | 18.9393 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
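As a rough usage sketch (not part of the auto-generated card): the downstream task is not documented here, so summarization is only an assumption based on the ROUGE metrics reported above.

```python
# Illustrative sketch only; the task is assumed to be summarization.
from transformers import pipeline

summarizer = pipeline("summarization", model="Seungjun/GSOCt5-small-finetuned-t5_V2")
text = (
    "Google Summer of Code is a global program focused on bringing more "
    "contributors into open source software development."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```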
|
manosp/textual_inversion_cat | manosp | 2023-07-06T17:00:54Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T12:44:05Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - manosp/textual_inversion_cat
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.




|
TheBloke/Robin-7B-v2-SuperHOT-8K-GGML | TheBloke | 2023-07-06T16:53:54Z | 0 | 3 | null | [
"license:other",
"region:us"
] | null | 2023-07-06T16:48:02Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 GGML
These files are GGML format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/TheBloke/robin-7B-v2-fp16).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OptimalScale/robin-7b-v2-delta)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| robin-7b-v2-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| robin-7b-v2-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b-v2-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| robin-7b-v2-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| robin-7b-v2-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| robin-7b-v2-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| robin-7b-v2-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| robin-7b-v2-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| robin-7b-v2-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 robin-7b-v2-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is the second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: OptimalScale's Robin 7B v2
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
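For the GGML files provided in this repo, a minimal llama-cpp-python sketch using the prompt template above might look like this. The filename is taken from the provided-files table; `n_ctx` is kept at 2048 because, as noted earlier in this card, the extended 8K context currently requires KoboldCpp rather than plain llama.cpp clients.

```python
# Illustrative sketch: run one of the GGML files above with llama-cpp-python,
# formatting the prompt with the template from this card.
from llama_cpp import Llama

llm = Llama(model_path="robin-7b-v2-superhot-8k.ggmlv3.q4_K_M.bin", n_ctx=2048)

prompt = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's questions\n"
    "###Human: What is the tallest mountain on Earth?\n"
    "###Assistant:"
)
output = llm(prompt, max_tokens=128, stop=["###Human:"])
print(output["choices"][0]["text"])
```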
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 7B v2
No model card provided in source repository.
|
aroot/eng-guj-simcse_random_usbbu | aroot | 2023-07-06T16:38:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T16:17:37Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-simcse_random_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-simcse_random_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2985
- Bleu: 2.6375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
kroai/Kro-RVC-V2 | kroai | 2023-07-06T16:38:56Z | 0 | 1 | null | [
"license:openrail",
"region:us"
] | null | 2023-06-26T08:44:27Z | ---
license: openrail
---
No need to credit me! If you use one of my models, send a link my way! I'd love to check out what you make with it.
Enjoy! |
hopkins/eng-deu-common | hopkins | 2023-07-06T16:32:47Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T16:14:33Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-deu-common
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-deu-common
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6565
- Bleu: 21.1959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BigBri/my_awesome_eli5_clm-model | BigBri | 2023-07-06T16:27:13Z | 184 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-06T15:52:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.878 | 1.0 | 1135 | 3.7288 |
| 3.7835 | 2.0 | 2270 | 3.7118 |
| 3.7358 | 3.0 | 3405 | 3.7081 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
TheBloke/Koala-13B-SuperHOT-8K-GGML | TheBloke | 2023-07-06T16:16:54Z | 0 | 2 | null | [
"license:other",
"region:us"
] | null | 2023-07-06T16:06:23Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Koala 13B GGML
These files are GGML format model files for [Koala 13B](https://huggingface.co/TheBloke/koala-13b-HF).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp; however, work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/young-geng/koala)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| koala-13b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| koala-13b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| koala-13b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| koala-13b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| koala-13b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| koala-13b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| koala-13b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| koala-13b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| koala-13b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 koala-13b-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
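As a minimal sketch (assuming `llama_rope_scaled_monkey_patch.py` has been copied into the working directory as described above; the exact function signature may differ depending on the patch version), applying it before loading the model might look like this:

```python
# Hypothetical usage sketch: apply the RoPE scaling monkeypatch before any
# Transformers LLaMA model is instantiated.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()  # patches Transformers' RoPE implementation in place

# Only after patching, load the fp16 model as usual.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/Koala-13B-SuperHOT-8K-fp16")
model = AutoModelForCausalLM.from_pretrained("TheBloke/Koala-13B-SuperHOT-8K-fp16")
```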
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
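For reference, a rough sketch of that configuration expressed with the PEFT library is shown below; the actual training script is not published here, so treat the names as illustrative assumptions only:

```python
from peft import LoraConfig

# LoRA settings as described above: rank 4, alpha 8, no dropout, no bias,
# applied to the q/k/v/o projection modules of the LLaMA base model.
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimiser settings as described above (the `model` variable is a placeholder):
# optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
#                               betas=(0.9, 0.99), eps=1e-5, weight_decay=0.1)
```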
# Original model card: Koala 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Koala: A Dialogue Model for Academic Research
This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model.
This version has then been converted to HF format.
## My Koala repos
I have the following Koala model repositories available:
**13B models:**
* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
* [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
* [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GGML)
**7B models:**
* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GGML)
## How the Koala delta weights were merged
The Koala delta weights were merged using the following commands:
```
git clone https://github.com/young-geng/EasyLM
git clone https://huggingface.co/TheBloke/llama-13b
mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2
cd ../EasyLM
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_torch_to_easylm \
--checkpoint_dir=/content/llama-13b \
--output_file=/content/llama-13b-LM \
--streaming=True
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.scripts.diff_checkpoint --recover_diff=True \
--load_base_checkpoint='params::/content/llama-13b-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \
--output_file=/content/koala_13b.diff.weights \
--streaming=True
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \
--output_dir=/content/koala-13B-HF \
--load_checkpoint='params::/content/koala_13b.diff.weights' \
--tokenizer_path=/content/llama-13b/tokenizer.model
```
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
## Further info
Check out the following links to learn more about the Berkeley Koala model.
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)
* [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)
## License
The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
|
aroot/eng-fra-simcse_central_usbbu | aroot | 2023-07-06T16:12:08Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T15:56:11Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_central_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_central_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1571
- Bleu: 32.0309
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
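A minimal inference sketch (assuming the checkpoint keeps the standard mBART-50 tokenizer and language codes, which is not documented above):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aroot/eng-fra-simcse_central_usbbu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # English source, per the model name
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"],  # force French output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```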
|
awadelewis/distilbert-base-uncased-finetuned-emotion | awadelewis | 2023-07-06T16:11:10Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-07-06T16:11:10Z | ---
license: cc-by-nc-sa-4.0
---
|
AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63 | AustinCarthy | 2023-07-06T16:05:12Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-06T13:54:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_domain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_domain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_benign_95K_top_p_0.75domain dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1054
- Accuracy: 0.9794
- F1: 0.8143
- Precision: 0.7147
- Recall: 0.9462
- Roc Auc Score: 0.9637
- Tpr At Fpr 0.01: 0.6968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.1248 | 1.0 | 21554 | 0.0671 | 0.9812 | 0.8187 | 0.7579 | 0.8902 | 0.9380 | 0.7268 |
| 0.1017 | 2.0 | 43108 | 0.0643 | 0.9816 | 0.8301 | 0.7394 | 0.9462 | 0.9648 | 0.7754 |
| 0.0777 | 3.0 | 64662 | 0.0640 | 0.9827 | 0.8379 | 0.7574 | 0.9376 | 0.9613 | 0.7482 |
| 0.058 | 4.0 | 86216 | 0.0830 | 0.9812 | 0.8281 | 0.7337 | 0.9504 | 0.9666 | 0.7248 |
| 0.0375 | 5.0 | 107770 | 0.1054 | 0.9794 | 0.8143 | 0.7147 | 0.9462 | 0.9637 | 0.6968 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
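A minimal usage sketch (assuming the checkpoint is a standard BERT sequence classifier; the label names returned are whatever the Trainer saved, which is not documented above):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AustinCarthy/Benign10MGPT2_domain_100KP_BFall_fromB_90K_topP_0.75_ratio2.63",
)
print(classifier("http://example-login-verify.com/account"))  # illustrative URL
```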
|
aroot/eng-fra-simcse_random_usbbu | aroot | 2023-07-06T16:04:11Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T15:45:49Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random_usbbu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random_usbbu
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1521
- Bleu: 31.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hopkins/eng-mya-longest | hopkins | 2023-07-06T15:56:41Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T15:35:44Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-longest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-longest
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7988
- Bleu: 4.7722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Abinaya/opt-1.3b-lora-summaryv2 | Abinaya | 2023-07-06T15:56:35Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-06T15:56:33Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
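A minimal loading sketch; the base model is assumed from the adapter name to be `facebook/opt-1.3b`, which is not stated above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "facebook/opt-1.3b"  # assumption based on the adapter name
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,   # matches the 8-bit quantization config above
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base_model, "Abinaya/opt-1.3b-lora-summaryv2")
model.eval()
```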
|
hopkins/eng-mya-random | hopkins | 2023-07-06T15:51:04Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-06T15:34:41Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-random
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-random
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8560
- Bleu: 4.6640
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TheBloke/Baize-v2-7B-SuperHOT-8K-GGML | TheBloke | 2023-07-06T15:40:36Z | 0 | 2 | null | [
"arxiv:2304.01196",
"license:other",
"region:us"
] | null | 2023-07-06T15:03:12Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Project Baize's Baize 7B v2 GGML
These files are GGML format model files for [Project Baize's Baize 7B v2](https://huggingface.co/project-baize/baize-v2-7b).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/project-baize/baize-v2-7b)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| baize-7b-v2-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| baize-7b-v2-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baize-7b-v2-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| baize-7b-v2-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| baize-7b-v2-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| baize-7b-v2-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| baize-7b-v2-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| baize-7b-v2-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| baize-7b-v2-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --contextsize 4096 --gpulayers 100 baize-7b-v2-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Project Baize's Baize 7B v2
<p align="center">
<img width="500px" alt="Project Baize" src="https://user-images.githubusercontent.com/22514219/229195563-0cddfa74-e52f-4413-b4b4-e4ba489c4b3d.png">
</p>
<hr>
## ⚠️Warning
Using Baize checkpoints directly without the following format will not work.
```
The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!
```
`[|Human|]` and `[|AI|]` are required to mark the messages from the user and Baize. We recommend checking out our [GitHub](https://github.com/project-baize/baize) to find the best way to use Baize with our demo or Fastchat.
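As an illustrative sketch, the required format can be assembled like this before passing the string to your generation backend of choice:

```python
SYSTEM = (
    "The following is a conversation between a human and an AI assistant named Baize "
    "(named after a mythical creature in Chinese folklore). Baize is an open-source AI "
    "assistant developed by UCSD and Sun Yat-Sen University. The human and the AI "
    "assistant take turns chatting. Human statements start with [|Human|] and AI "
    "assistant statements start with [|AI|]. The AI assistant always provides responses "
    "in as much detail as possible, and in Markdown format. The AI assistant always "
    "declines to engage with topics, questions and instructions related to unethical, "
    "controversial, or sensitive issues. Complete the transcript in exactly that format."
)

def build_prompt(turns):
    """turns is a list of (speaker, text) pairs, with speaker in {"Human", "AI"}."""
    body = "".join(f"\n[|{speaker}|]{text}" for speaker, text in turns)
    return SYSTEM + body + "\n[|AI|]"  # leave the final AI turn open for generation

prompt = build_prompt([("Human", "Hello!"), ("AI", "Hi!"), ("Human", "What is Baize?")])
```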
## Demo
https://huggingface.co/spaces/project-baize/chat-with-baize
## What's Baize?
Baize is an open-source chat model fine-tuned with [LoRA](https://github.com/microsoft/LoRA). This model is a **7B Baize-v2**, trained with supervised fine-tuning (SFT) and self-distillation with feedback (SDF). This checkpoint has been merged with LLaMA so it's ready for use.
## Why it's called Baize?
Baize (白泽) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model.
## How to use it: local demo, API and SDK
More details can be found in the Baize [GitHub](https://github.com/project-baize/baize) and [Paper](https://arxiv.org/abs/2304.01196).
|
jpohhhh/MiniLM-L6-v2-optimum-embeddings | jpohhhh | 2023-07-06T15:40:13Z | 8 | 0 | generic | [
"generic",
"onnx",
"bert",
"sentence-embeddings",
"endpoints-template",
"optimum",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-07-06T15:03:26Z | ---
license: mit
tags:
- sentence-embeddings
- endpoints-template
- optimum
library_name: generic
---
This repository is a fork of philschmid/all-MiniLM-L6-v2-optimum-embeddings.
My own ONNX conversion seems to be about 4x slower, with no discernible reason why: the quantized models seem roughly the same.
The idea here is that, by forking, we can for example upgrade the Optimum library that is used as well.
|
keonju/BERTopic | keonju | 2023-07-06T15:36:42Z | 22 | 2 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | 2023-06-22T16:09:55Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("keonju/BERTopic")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 158
* Number of training documents: 10158
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | and - the - of - in - to | 10 | -1_and_the_of_in |
| 0 | holocene - china - the - monsoon - bp | 3858 | 0_holocene_china_the_monsoon |
| 1 | energy - biofuels - production - biodiesel - bioenergy | 291 | 1_energy_biofuels_production_biodiesel |
| 2 | coal - coals - the - basin - seams | 248 | 2_coal_coals_the_basin |
| 3 | yr - holocene - the - bp - and | 205 | 3_yr_holocene_the_bp |
| 4 | hg - mercury - mehg - of hg - in | 202 | 4_hg_mercury_mehg_of hg |
| 5 | ch4 - methane - emissions - fluxes - flux | 159 | 5_ch4_methane_emissions_fluxes |
| 6 | data - forest - spectral - for - mapping | 118 | 6_data_forest_spectral_for |
| 7 | bp - the - holocene - pollen - lake | 116 | 7_bp_the_holocene_pollen |
| 8 | wetlands - wetland - and - are - of | 104 | 8_wetlands_wetland_and_are |
| 9 | co2 - ecosystem - nee - exchange - net | 103 | 9_co2_ecosystem_nee_exchange |
| 10 | species - of - fen - the - restoration | 100 | 10_species_of_fen_the |
| 11 | peat - tropical - peatlands - palm - peatland | 98 | 11_peat_tropical_peatlands_palm |
| 12 | pb - lead - atmospheric - metal - deposition | 96 | 12_pb_lead_atmospheric_metal |
| 13 | the - lake - of the - of - poland | 93 | 13_the_lake_of the_of |
| 14 | pm2 - haze - burning - air - aerosol | 90 | 14_pm2_haze_burning_air |
| 15 | doc - catchments - carbon - organic carbon - export | 88 | 15_doc_catchments_carbon_organic carbon |
| 16 | the - carbon - of - co2 - of the | 73 | 16_the_carbon_of_co2 |
| 17 | wetland - wetlands - classification - mapping - and | 69 | 17_wetland_wetlands_classification_mapping |
| 18 | uv - ozone - o3 - isoprene - elevated | 67 | 18_uv_ozone_o3_isoprene |
| 19 | mediterranean - the - glacial - iberian - during | 66 | 19_mediterranean_the_glacial_iberian |
| 20 | media - compost - growing media - growing - biochar | 63 | 20_media_compost_growing media_growing |
| 21 | 137cs - of 137cs - sup - ce sup - radiocaesium | 63 | 21_137cs_of 137cs_sup_ce sup |
| 22 | testate - amoebae - testate amoebae - of testate - amoeba | 62 | 22_testate_amoebae_testate amoebae_of testate |
| 23 | peat - pyrolysis - lignin - gc - of | 62 | 23_peat_pyrolysis_lignin_gc |
| 24 | cu - zn - metals - peat - elements | 62 | 24_cu_zn_metals_peat |
| 25 | alkanes - alkane - chain - values - plants | 61 | 25_alkanes_alkane_chain_values |
| 26 | permafrost - active layer - thermal - ground - layer | 60 | 26_permafrost_active layer_thermal_ground |
| 27 | streams - diatom - species - macroinvertebrate - stream | 60 | 27_streams_diatom_species_macroinvertebrate |
| 28 | records - the - of - record - ireland | 60 | 28_records_the_of_record |
| 29 | water - flow - groundwater - recharge - runoff | 59 | 29_water_flow_groundwater_recharge |
| 30 | habitat - species - breeding - bird - nest | 57 | 30_habitat_species_breeding_bird |
| 31 | brgdgts - gdgts - glycerol - brgdgt - branched | 56 | 31_brgdgts_gdgts_glycerol_brgdgt |
| 32 | deposition - nitrogen - nitrogen deposition - sphagnum - of | 55 | 32_deposition_nitrogen_nitrogen deposition_sphagnum |
| 33 | oil sands - sands - fen - oil - reclamation | 54 | 33_oil sands_sands_fen_oil |
| 34 | fire - burned - severity - burning - post fire | 54 | 34_fire_burned_severity_burning |
| 35 | acidification - deposition - acid - ph - catchment | 54 | 35_acidification_deposition_acid_ph |
| 36 | farm - land - agricultural - farmers - policy | 53 | 36_farm_land_agricultural_farmers |
| 37 | cdom - doc - dom - dissolved organic - dissolved | 53 | 37_cdom_doc_dom_dissolved organic |
| 38 | redd - indonesia - deforestation - in indonesia - forest | 50 | 38_redd_indonesia_deforestation_in indonesia |
| 39 | ash - wood ash - wood - growth - of wood | 49 | 39_ash_wood ash_wood_growth |
| 40 | fungal - fungi - mycorrhizal - species - root | 49 | 40_fungal_fungi_mycorrhizal_species |
| 41 | stand - growth - models - tree - stands | 49 | 41_stand_growth_models_tree |
| 42 | smouldering - smoldering - spread - peat - combustion | 49 | 42_smouldering_smoldering_spread_peat |
| 43 | pollen - of pollen - vegetation - of - from | 49 | 43_pollen_of pollen_vegetation_of |
| 44 | arsenic - as - of as - fe - of arsenic | 49 | 44_arsenic_as_of as_fe |
| 45 | ch4 - methane - production - peat - methanogenesis | 47 | 45_ch4_methane_production_peat |
| 46 | africa - the - bp - south - late | 46 | 46_africa_the_bp_south |
| 47 | soc - carbon - soil - stocks - land | 45 | 47_soc_carbon_soil_stocks |
| 48 | soil - organic - carbon - soil organic - soils | 45 | 48_soil_organic_carbon_soil organic |
| 49 | wetlands - constructed - wetland - treatment - phosphorus | 43 | 49_wetlands_constructed_wetland_treatment |
| 50 | microbial - rare - soil - bacterial - diversity | 43 | 50_microbial_rare_soil_bacterial |
| 51 | litter - decomposition - mass loss - litter decomposition - mass | 39 | 51_litter_decomposition_mass loss_litter decomposition |
| 52 | co2 - pco2 - emissions - carbon - ch4 | 39 | 52_co2_pco2_emissions_carbon |
| 53 | soc - carbon - wetland - wetlands - soil | 39 | 53_soc_carbon_wetland_wetlands |
| 54 | countries - emissions - emission - to - climate | 38 | 54_countries_emissions_emission_to |
| 55 | services - ecosystem - ecosystem services - es - pes | 37 | 55_services_ecosystem_ecosystem services_es |
| 56 | catalyst - peat - pyrolysis - char - catalysts | 37 | 56_catalyst_peat_pyrolysis_char |
| 57 | clearfelling - water - phosphorus - buffer - nutrient | 35 | 57_clearfelling_water_phosphorus_buffer |
| 58 | forest - forests - trees - tree - stands | 35 | 58_forest_forests_trees_tree |
| 59 | carbon - climate - atmosphere - earth - carbon cycle | 34 | 59_carbon_climate_atmosphere_earth |
| 60 | tephra - volcanic - cryptotephra - eruptions - tephras | 34 | 60_tephra_volcanic_cryptotephra_eruptions |
| 61 | testate - arcellinida - coi - species - amoebae | 34 | 61_testate_arcellinida_coi_species |
| 62 | methane - methanogenic - community - methanogen - methanogens | 34 | 62_methane_methanogenic_community_methanogen |
| 63 | consolidation - soil - embankment - road - the | 33 | 63_consolidation_soil_embankment_road |
| 64 | species - spider - bogs - spiders - habitat | 33 | 64_species_spider_bogs_spiders |
| 65 | evaporation - energy - model - was - the | 33 | 65_evaporation_energy_model_was |
| 66 | phosphorus - catchment - in - tp - concentrations | 33 | 66_phosphorus_catchment_in_tp |
| 67 | co2 - ch4 - marsh - wetland - emissions | 33 | 67_co2_ch4_marsh_wetland |
| 68 | runoff - peat - channels - flow - catchment | 33 | 68_runoff_peat_channels_flow |
| 69 | nutrient - nitrogen - fertilizer - litter - of | 32 | 69_nutrient_nitrogen_fertilizer_litter |
| 70 | brazil - bp - the - of - in the | 31 | 70_brazil_bp_the_of |
| 71 | tsunami - holocene - the - volcanic - deposits | 30 | 71_tsunami_holocene_the_volcanic |
| 72 | climate change - change - climate - biodiversity - ecosystem | 30 | 72_climate change_change_climate_biodiversity |
| 73 | gpr - resistivity - radar - penetrating - penetrating radar | 29 | 73_gpr_resistivity_radar_penetrating |
| 74 | holocene - the - andes - and - bp | 29 | 74_holocene_the_andes_and |
| 75 | permafrost - soc - soil - soils - arctic | 28 | 75_permafrost_soc_soil_soils |
| 76 | policy - forest - owners - arguments - forest owners | 28 | 76_policy_forest_owners_arguments |
| 77 | bog - poland - peatland - europe - ca | 28 | 77_bog_poland_peatland_europe |
| 78 | ch4 - oxidation - methane - paddy - aom | 28 | 78_ch4_oxidation_methane_paddy |
| 79 | enzyme - enzymes - eea - soil - activities | 28 | 79_enzyme_enzymes_eea_soil |
| 80 | channel - catchment - flow - bends - model | 28 | 80_channel_catchment_flow_bends |
| 81 | soil - soil science - science - of soil - eu | 27 | 81_soil_soil science_science_of soil |
| 82 | pahs - pah - polycyclic aromatic - polycyclic - aromatic | 27 | 82_pahs_pah_polycyclic aromatic_polycyclic |
| 83 | n2o - n2o emissions - emissions - emission - nitrous | 26 | 83_n2o_n2o emissions_emissions_emission |
| 84 | peat water - adsorption - electrocoagulation - brackish peat - brackish peat water | 26 | 84_peat water_adsorption_electrocoagulation_brackish peat |
| 85 | mangrove - mangroves - carbon - coastal - b2 | 26 | 85_mangrove_mangroves_carbon_coastal |
| 86 | species - retention - alien - richness - forests | 25 | 86_species_retention_alien_richness |
| 87 | colloidal - river - elements - fe - colloids | 25 | 87_colloidal_river_elements_fe |
| 88 | sulfate - sulfur - 34s - peat - sulphur | 24 | 88_sulfate_sulfur_34s_peat |
| 89 | caribou - habitat - woodland caribou - populations - wolf | 24 | 89_caribou_habitat_woodland caribou_populations |
| 90 | food - agriculture - food system - change - covid 19 | 24 | 90_food_agriculture_food system_change |
| 91 | microbial - community - microbial community - communities - bacterial | 23 | 91_microbial_community_microbial community_communities |
| 92 | sorption - cu - ions - ii - cu ii | 22 | 92_sorption_cu_ions_ii |
| 93 | fire - fires - algorithm - frp - hotspot | 22 | 93_fire_fires_algorithm_frp |
| 94 | choice - wtp - preferences - valuation - choice experiment | 22 | 94_choice_wtp_preferences_valuation |
| 95 | nematodes - earthworm - soil - food - nematode | 22 | 95_nematodes_earthworm_soil_food |
| 96 | conservation - orangutan - habitat - forest - species | 21 | 96_conservation_orangutan_habitat_forest |
| 97 | cushion - accumulation - peat - amazonian - vegetation | 21 | 97_cushion_accumulation_peat_amazonian |
| 98 | ch4 - oxidation - ch4 oxidation - uptake - ch4 uptake | 20 | 98_ch4_oxidation_ch4 oxidation_uptake |
| 99 | tidal - sediment - coastal - delta - the | 20 | 99_tidal_sediment_coastal_delta |
| 100 | emissions - co2 - ghg - n2o - table | 20 | 100_emissions_co2_ghg_n2o |
| 101 | methane - ph - cytochrome - methanotrophs - acetic acid | 20 | 101_methane_ph_cytochrome_methanotrophs |
| 102 | patterns - model - self organization - evolutionary - self | 20 | 102_patterns_model_self organization_evolutionary |
| 103 | nitrogen - denitrification - n2o - soil - n2 | 20 | 103_nitrogen_denitrification_n2o_soil |
| 104 | birch - rotation - biomass - buds - biomass production | 19 | 104_birch_rotation_biomass_buds |
| 105 | fire - wildfire - fires - wildfires - health | 19 | 105_fire_wildfire_fires_wildfires |
| 106 | grazing - heathland - heather - moorland - england | 19 | 106_grazing_heathland_heather_moorland |
| 107 | emissions - fire - burning - fire emissions - biomass burning | 19 | 107_emissions_fire_burning_fire emissions |
| 108 | peat - landslides - failure - of peat - peat compaction | 18 | 108_peat_landslides_failure_of peat |
| 109 | biochar - straw - soil - fe - bc | 18 | 109_biochar_straw_soil_fe |
| 110 | ecosystem - respiration - carbon - ecosystem respiration - meadow | 17 | 110_ecosystem_respiration_carbon_ecosystem respiration |
| 111 | wetland - wetlands - risk - of wetland - the wetland | 17 | 111_wetland_wetlands_risk_of wetland |
| 112 | dom - thm - groundwater - molecular - organic | 17 | 112_dom_thm_groundwater_molecular |
| 113 | geochemistry - landscape geochemistry - rocks - peat - mafic | 17 | 113_geochemistry_landscape geochemistry_rocks_peat |
| 114 | tundra - ch4 - n2o - fluxes - antarctic | 16 | 114_tundra_ch4_n2o_fluxes |
| 115 | cellulose - sphagnum - isotopic - isotope - δ18ocel | 16 | 115_cellulose_sphagnum_isotopic_isotope |
| 116 | solute - transport - chloride - peat - pore | 16 | 116_solute_transport_chloride_peat |
| 117 | charcoal - fire - fires - holocene - fire history | 15 | 117_charcoal_fire_fires_holocene |
| 118 | ghg - agricultural - dairy - abatement - emissions | 15 | 118_ghg_agricultural_dairy_abatement |
| 119 | palm - oil - palm oil - sustainability - industry | 15 | 119_palm_oil_palm oil_sustainability |
| 120 | humic - humic substances - substances - acids - fluorescence | 15 | 120_humic_humic substances_substances_acids |
| 121 | canopy - ndvi - pri - lue - phenological | 15 | 121_canopy_ndvi_pri_lue |
| 122 | pollen - bog - peat - the - human impact | 15 | 122_pollen_bog_peat_the |
| 123 | marshes - tidal - marshes are - salt - or | 15 | 123_marshes_tidal_marshes are_salt |
| 124 | soil - prediction - mapping - covariates - dsm | 15 | 124_soil_prediction_mapping_covariates |
| 125 | si - of si - silicon - biogenic - protozoic | 14 | 125_si_of si_silicon_biogenic |
| 126 | et - evapotranspiration - le - wetland - rice | 14 | 126_et_evapotranspiration_le_wetland |
| 127 | forest - finland - forests - stock - management | 14 | 127_forest_finland_forests_stock |
| 128 | iodine - 129i - sorption - iodide - the sorption | 14 | 128_iodine_129i_sorption_iodide |
| 129 | palm - oil - palm oil - smallholders - certification | 14 | 129_palm_oil_palm oil_smallholders |
| 130 | dndc - model - models - soil - carbon | 14 | 130_dndc_model_models_soil |
| 131 | snow - thaw - cover - sca - data | 14 | 131_snow_thaw_cover_sca |
| 132 | stx2 - microbiota - gut - gut microbiota - microbial | 13 | 132_stx2_microbiota_gut_gut microbiota |
| 133 | dom - doc - organic - dissolved organic - of dom | 13 | 133_dom_doc_organic_dissolved organic |
| 134 | forest - cbm - ontario - cfs3 - cbm cfs3 | 13 | 134_forest_cbm_ontario_cfs3 |
| 135 | wind - wind farms - farms - onshore - onshore wind | 13 | 135_wind_wind farms_farms_onshore |
| 136 | uranium - of uranium - 232th - th - ar | 13 | 136_uranium_of uranium_232th_th |
| 137 | groundwater - springs - spring - gdes - discharge | 13 | 137_groundwater_springs_spring_gdes |
| 138 | fire - forest - boreal - burned - fires | 13 | 138_fire_forest_boreal_burned |
| 139 | metal - metals - cd - sediments - zn | 13 | 139_metal_metals_cd_sediments |
| 140 | slr - sea level - coastal - sea - sea level rise | 13 | 140_slr_sea level_coastal_sea |
| 141 | damo - methane - anaerobic - oxidation - aom | 12 | 141_damo_methane_anaerobic_oxidation |
| 142 | temperature - microbial - soil - co2 - pd | 12 | 142_temperature_microbial_soil_co2 |
| 143 | soil - respiration - root - soil respiration - enchytraeid | 12 | 143_soil_respiration_root_soil respiration |
| 144 | kerp - fusiformisporites - permian - genus - flora | 11 | 144_kerp_fusiformisporites_permian_genus |
| 145 | dust - dust deposition - dust sources - deposition - atmospheric dust | 11 | 145_dust_dust deposition_dust sources_deposition |
| 146 | methane - sources - ch4 - les - de | 11 | 146_methane_sources_ch4_les |
| 147 | n2o - n2o emissions - emissions - permafrost - n2o fluxes | 11 | 147_n2o_n2o emissions_emissions_permafrost |
| 148 | australia - mis - record - ka - crater | 11 | 148_australia_mis_record_ka |
| 149 | oc - fjords - fjord - lakes - of oc | 10 | 149_oc_fjords_fjord_lakes |
| 150 | fe - reduction - fe iii - sr10 - iron | 10 | 150_fe_reduction_fe iii_sr10 |
| 151 | loading - eutrophication - nitrogen - coastal - phytoplankton | 10 | 151_loading_eutrophication_nitrogen_coastal |
| 152 | model - wetlands - groundwater - water - the wetlands | 10 | 152_model_wetlands_groundwater_water |
| 153 | co2 - soil - co2 efflux - soil co2 efflux - soil co2 | 10 | 153_co2_soil_co2 efflux_soil co2 efflux |
| 154 | transfer - transfer functions - transfer function - testate - functions | 10 | 154_transfer_transfer functions_transfer function_testate |
| 155 | peat - spain - bog - matter - autofluorescent | 10 | 155_peat_spain_bog_matter |
| 156 | isbas - insar - subsidence - motion - deformation | 10 | 156_isbas_insar_subsidence_motion |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 3)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 30
* verbose: False
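As an illustrative sketch only, these hyperparameters correspond roughly to the following initialisation; the embedding model actually used is not recorded above, so the one shown here is a placeholder assumption:

```python
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# "language: None" in the saved parameters suggests a custom embedding model was
# supplied during training; which one is not recorded, so this is a placeholder.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")

topic_model = BERTopic(
    embedding_model=embedding_model,
    top_n_words=30,
    n_gram_range=(1, 3),
    min_topic_size=10,
    nr_topics=None,
    calculate_probabilities=False,
    low_memory=False,
    verbose=False,
)
# topics, _ = topic_model.fit_transform(docs)  # docs: list of input documents
```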
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
josero23/rutt-3 | josero23 | 2023-07-06T15:26:38Z | 1 | 0 | diffusers | [
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-06T15:19:29Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### rutt_3 Dreambooth model trained by josero23 with TheLastBen's fast-DreamBooth notebook
|