modelId: string (length 5 to 139)
author: string (length 2 to 42)
last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-02 12:28:20)
downloads: int64 (0 to 223M)
likes: int64 (0 to 11.7k)
library_name: string (462 classes)
tags: sequence (length 1 to 4.05k)
pipeline_tag: string (54 classes)
createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-02 12:26:48)
card: string (length 11 to 1.01M)
JacquesVlaming/distilgpt2-finetuned-wikitext2
JacquesVlaming
2023-07-07T15:21:18Z
203
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T05:53:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikitext model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset. It achieves the following results on the evaluation set: - Loss: 3.6441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7667 | 1.0 | 2334 | 3.6684 | | 3.6383 | 2.0 | 4668 | 3.6468 | | 3.5906 | 3.0 | 7002 | 3.6441 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.0 - Tokenizers 0.13.3
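A minimal usage sketch (not part of the original card; the prompt text is only an illustration):

```python
from transformers import pipeline

# load the fine-tuned checkpoint from the Hub
generator = pipeline("text-generation", model="JacquesVlaming/distilgpt2-finetuned-wikitext2")

# sample a short continuation; the prompt is arbitrary
print(generator("The history of the region begins", max_new_tokens=40)[0]["generated_text"])
```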
namedotpg/a2c-AntBulletEnv-v0
namedotpg
2023-07-07T15:17:41Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T15:16:29Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1335.40 +/- 398.54 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
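A minimal loading sketch to fill in the TODO above (the checkpoint filename is an assumption; check the repository's file list for the actual `.zip` name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# download the saved agent from the Hub (filename is assumed, not confirmed by the card)
checkpoint = load_from_hub(repo_id="namedotpg/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# model.predict(observation) can now be used inside an AntBulletEnv-v0 rollout loop
```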
xunnylee/nagitoKomaeda
xunnylee
2023-07-07T15:10:03Z
0
0
null
[ "license:openrail", "region:us" ]
null
2023-07-07T15:08:31Z
--- license: openrail --- hi! thank you for using my model! please credit me @xunnylee on youtube and/or discord if you use it! enjoy! :D
rodrigoclira/a2c-PandaReachDense-v2
rodrigoclira
2023-07-07T15:03:07Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T15:00:27Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.00 +/- 0.53 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
SarielSinLuo/bert-large-uncased-finetuned-cola
SarielSinLuo
2023-07-07T15:02:41Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T14:54:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: bert-large-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6382762835780119 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-cola This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7511 - Matthews Correlation: 0.6383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.473 | 1.0 | 535 | 0.4860 | 0.5855 | | 0.2792 | 2.0 | 1070 | 0.4821 | 0.5986 | | 0.1859 | 3.0 | 1605 | 0.6926 | 0.6381 | | 0.119 | 4.0 | 2140 | 0.7511 | 0.6383 | | 0.0631 | 5.0 | 2675 | 0.8702 | 0.6258 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
digiplay/PrefixFantasyMix_v1
digiplay
2023-07-07T14:55:07Z
368
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T14:07:16Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/104681/prefix-realistic-mix
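A minimal diffusers sketch for this checkpoint (not part of the original card; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("digiplay/PrefixFantasyMix_v1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a fantasy castle on a floating island, highly detailed").images[0]
image.save("fantasy.png")
```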
Sympan/DeepQ_Atari_SpaceInv4_v1
Sympan
2023-07-07T14:47:09Z
3
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T14:46:32Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 649.50 +/- 258.67 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sympan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sympan -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sympan ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
WSH032/wd-v1-4-tagger-feature-extractor
WSH032
2023-07-07T14:43:18Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2023-07-07T10:05:09Z
--- license: apache-2.0 --- # Credit Jul 7th, 2023\ modified from [SmilingWolf/wd-v1-4-moat-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-moat-tagger-v2)\ `sha256==8452cddf280b952281b6e102411c50e981cb2908` I use this model for image feature extraction to cluster images [https://github.com/WSH032/image-deduplicate-cluster-webui](https://github.com/WSH032/image-deduplicate-cluster-webui) # What did I do? I changed the model to output its last four layers,\ and converted it from Keras to ONNX. --- # Env Tools here [https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials](https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials) Thanks to Colab ```shell onnx == 1.14.0 tf2onnx == 1.14.0 tensorflow == 2.12.0 ``` --- # Detail ## Detail about model [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/wd14_tf2onnx.ipynb) ```toml # modified from "SmilingWolf/wd-v1-4-moat-tagger-v2" # 8452cddf280b952281b6e102411c50e981cb2908 # inputs: ['input_1'] # outputs: ['predictions_sigmoid', 'predictions_dense', 'predictions_norm', 'predictions_globalavgpooling'] # the leftmost output is the outermost layer [[input]] name = "input_1" # present in the original model shape = [ "None", 448, 448, 3,] dtype = "float32" [[output]] name = "predictions_sigmoid" # present in the original model shape = [ "None", 9083,] dtype = "float32" [[output]] name = "predictions_dense" shape = [ "None", 9083,] dtype = "float32" [[output]] name = "predictions_norm" shape = [ "None", 1024,] dtype = "float32" [[output]] name = "predictions_globalavgpooling" shape = [ "None", 1024,] dtype = "float32" ``` ## Detail about `wd14_tags.toml` It is modified from `wd-v1-4-moat-tagger-v2/selected_tags.csv` `[rating]` means `category == 9` in `selected_tags.csv`\ `[general]` means `category == 0` in `selected_tags.csv`\ `[character]` means `category == 4` in `selected_tags.csv` ## Detail about `candidate_labels_scores_*.npz` [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/candidate_labels.ipynb) ```python import numpy as np import pandas as pd import toml with open("wd14_tags.toml", "r") as f: general_tags = toml.load(f)["tags"][1]["tags"] # 0 -> rating, 1 -> general, 2 -> characters with np.load("candidate_labels_scores_safetensors.npz") as data: candidate_labels = data["candidate_labels"] # Similar to `[candidate_labels]` in `wd14_tags.toml` scores = data["scores"] df = pd.DataFrame( scores, index=candidate_labels, columns=general_tags, ) ``` These scores were inferred with [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli)\ `sha256 == 6a7865dd24917225ec499fad77e91b97baedf7da`
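A minimal onnxruntime sketch of extracting the 1024-d `predictions_norm` features described above (the `.onnx` filename and the zero-valued input are assumptions; a real pipeline would resize an image to 448x448 and preprocess it the same way the tagger does):

```python
import numpy as np
import onnxruntime as ort

# filename is assumed -- check the repository for the actual .onnx file
session = ort.InferenceSession("model.onnx")

# dummy RGB batch in the layout documented by the card: (batch, 448, 448, 3), float32
batch = np.zeros((1, 448, 448, 3), dtype=np.float32)

(features,) = session.run(["predictions_norm"], {"input_1": batch})
print(features.shape)  # (1, 1024) -- usable as an image embedding for clustering
```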
IMJONEZZ/SlovenBERTcina
IMJONEZZ
2023-07-07T14:38:59Z
191
3
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
# Slovak RoBERTa Masked Language Model ### 83M parameters in the small model. Medium and Large models coming soon! RoBERTa pretrained tokenizer vocab and merges included. --- ## Training params: - **Dataset**: 8GB Slovak monolingual dataset including ParaCrawl (monolingual), OSCAR, and several gigs of my own findings and cleaning. - **Preprocessing**: Tokenized with a pretrained ByteLevelBPETokenizer trained on the same dataset. Uncased, with `<s>`, `<pad>`, `</s>`, `<unk>`, and `<mask>` special tokens. - **Evaluation results**: - Mnoho ľudí tu `<mask>` - žije. - žijú. - je. - trpí. - Ako sa `<mask>` - máte - máš - má - hovorí - Plážová sezóna pod Zoborom patrí medzi `<mask>` obdobia. - ročné - najkrajšie - najobľúbenejšie - najnáročnejšie - **Limitations**: The current model is fairly small, although it works very well. This model is meant to be fine-tuned on downstream tasks, e.g. part-of-speech tagging, question answering, or anything in GLUE or SuperGLUE. - **Credit**: If you use this or any of my models in research or professional work, please credit me, Christopher Brousseau, in said work.
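A minimal fill-mask sketch (not in the original card; the sentence is the first evaluation example listed above):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="IMJONEZZ/SlovenBERTcina")

# "Mnoho ľudí tu <mask>" -- the card lists žije / žijú / je / trpí as typical completions
for pred in fill("Mnoho ľudí tu <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```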
sinny/a2c-PandaReachDense-v2
sinny
2023-07-07T14:33:58Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T14:32:42Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.90 +/- 0.26 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
paulaperazzo/dixit
paulaperazzo
2023-07-07T14:33:28Z
2
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T14:19:56Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### dixit Dreambooth model trained by paulaperazzo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(20).jpg) ![1](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(13).jpg) ![2](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(18).jpg) ![3](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(9).jpg) ![4](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(21).jpg) ![5](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(10).jpg) ![6](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(15).jpg) ![7](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(34).jpg) ![8](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(4).jpg) ![9](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(26).jpg) ![10](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(29).jpg) ![11](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(6).jpg) ![12](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(28).jpg) ![13](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(2).jpg) ![14](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(23).jpg) ![15](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(33).jpg) ![16](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(1).jpg) ![17](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(31).jpg) ![18](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(32).jpg) ![19](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(24).jpg) ![20](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(3).jpg) ![21](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(16).jpg) ![22](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(22).jpg) ![23](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(14).jpg) ![24](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(19).jpg) ![25](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(25).jpg) ![26](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(35).jpg) ![27](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(8).jpg) ![28](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(17).jpg) ![29](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(27).jpg) ![30](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(5).jpg) ![31](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(11).jpg) ![32](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(7).jpg) 
![33](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(12).jpg) ![34](https://huggingface.co/paulaperazzo/dixit/resolve/main/sample_images/dixit_(30).jpg)
Cheng98/llama-160m
Cheng98
2023-07-07T14:24:44Z
182
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T13:50:17Z
--- license: apache-2.0 --- A toy Llama adapted from [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) with special tokens added. This checkpoint can be loaded into MASE's `LlamaQuantized`: ```python from transformers.models.llama import LlamaTokenizer from chop.models.manual.llama_quantized import ( LlamaQuantizedConfig, LlamaQuantizedForCausalLM, ) name = "Cheng98/llama-160m" tokenizer = LlamaTokenizer.from_pretrained(name) # override quant_config to quantize the model # the default config does not quantize Llama config = LlamaQuantizedConfig.from_pretrained( name, # quant_config="./quant_config_na.toml" ) llama = LlamaQuantizedForCausalLM.from_pretrained( name, config=config, ) ```
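For comparison, a plain transformers sketch (an assumption based on the repo's `text-generation` tags, not part of the original card) that loads the same checkpoint without MASE's quantized classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Cheng98/llama-160m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```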
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g040
jordyvl
2023-07-07T14:12:19Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T12:02:48Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g040 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g040 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1113 - Accuracy: 0.71 - Exit 0 Accuracy: 0.115 - Exit 1 Accuracy: 0.15 - Exit 2 Accuracy: 0.2025 - Exit 3 Accuracy: 0.0625 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7602 | 0.11 | 0.105 | 0.0675 | 0.0825 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7300 | 0.115 | 0.105 | 0.065 | 0.09 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6942 | 0.135 | 0.1075 | 0.06 | 0.105 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6645 | 0.1725 | 0.145 | 0.055 | 0.1125 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6341 | 0.175 | 0.1275 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5949 | 0.215 | 0.125 | 0.08 | 0.1125 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5729 | 0.2025 | 0.12 | 0.08 | 0.1175 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5500 | 0.2075 | 0.115 | 0.09 | 0.125 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.5220 | 0.2175 | 0.1175 | 0.0925 | 0.125 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4976 | 0.2275 | 0.1225 | 0.0975 | 0.125 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.4523 | 0.2525 | 0.1225 | 0.1 | 0.125 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3993 | 0.295 | 0.12 | 0.1275 | 0.1225 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.3545 | 0.315 | 0.12 | 0.1175 | 0.125 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.3057 | 0.335 | 0.1175 | 0.1175 | 0.1225 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.2490 | 0.355 | 0.1175 | 0.1275 | 0.1275 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.2131 | 0.355 | 0.115 | 0.125 | 0.1225 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.1526 | 0.3725 | 0.1125 | 0.135 | 0.125 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0828 | 0.3975 | 0.1025 | 0.14 | 0.125 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 2.0196 | 0.4225 | 0.1075 | 0.1425 | 0.1275 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.9756 | 0.4275 | 0.11 | 0.14 | 0.13 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.9239 | 0.4625 | 0.1125 | 0.1425 | 0.1275 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.8449 | 0.505 | 0.11 | 0.14 | 0.1275 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.7852 | 0.53 | 0.11 | 0.14 | 0.1275 | 0.0625 | 
0.0625 | | No log | 23.72 | 48 | 1.7626 | 0.5325 | 0.11 | 0.1425 | 0.1375 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.7041 | 0.5575 | 0.11 | 0.145 | 0.1475 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.6443 | 0.5825 | 0.1075 | 0.1475 | 0.145 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.6042 | 0.6 | 0.1075 | 0.1475 | 0.145 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.5753 | 0.6 | 0.1075 | 0.1475 | 0.15 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.5241 | 0.615 | 0.1075 | 0.145 | 0.1525 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.4874 | 0.6225 | 0.115 | 0.1425 | 0.155 | 0.0625 | 0.0625 | | No log | 30.72 | 62 | 1.4638 | 0.6275 | 0.115 | 0.145 | 0.1525 | 0.0625 | 0.0625 | | No log | 31.72 | 64 | 1.4460 | 0.64 | 0.1125 | 0.145 | 0.1525 | 0.0625 | 0.0625 | | No log | 32.72 | 66 | 1.3980 | 0.655 | 0.1125 | 0.145 | 0.1525 | 0.0625 | 0.0625 | | No log | 33.72 | 68 | 1.3708 | 0.6425 | 0.11 | 0.145 | 0.155 | 0.0625 | 0.0625 | | No log | 34.72 | 70 | 1.3584 | 0.6575 | 0.11 | 0.145 | 0.1575 | 0.0625 | 0.0625 | | No log | 35.72 | 72 | 1.3339 | 0.66 | 0.1125 | 0.1475 | 0.16 | 0.0625 | 0.0625 | | No log | 36.72 | 74 | 1.3046 | 0.6725 | 0.1125 | 0.15 | 0.1625 | 0.0625 | 0.0625 | | No log | 37.72 | 76 | 1.2891 | 0.6675 | 0.115 | 0.15 | 0.1625 | 0.0625 | 0.0625 | | No log | 38.72 | 78 | 1.2684 | 0.68 | 0.115 | 0.15 | 0.165 | 0.0625 | 0.0625 | | No log | 39.72 | 80 | 1.2400 | 0.705 | 0.1175 | 0.15 | 0.175 | 0.0625 | 0.0625 | | No log | 40.72 | 82 | 1.2277 | 0.695 | 0.12 | 0.15 | 0.175 | 0.0625 | 0.0625 | | No log | 41.72 | 84 | 1.2234 | 0.6975 | 0.1175 | 0.15 | 0.175 | 0.0625 | 0.0625 | | No log | 42.72 | 86 | 1.2082 | 0.6925 | 0.115 | 0.15 | 0.175 | 0.0625 | 0.0625 | | No log | 43.72 | 88 | 1.1851 | 0.71 | 0.1175 | 0.15 | 0.1725 | 0.0625 | 0.0625 | | No log | 44.72 | 90 | 1.1743 | 0.7075 | 0.1175 | 0.15 | 0.1725 | 0.0625 | 0.0625 | | No log | 45.72 | 92 | 1.1764 | 0.7 | 0.1175 | 0.15 | 0.1725 | 0.0625 | 0.0625 | | No log | 46.72 | 94 | 1.1731 | 0.6975 | 0.1175 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 47.72 | 96 | 1.1512 | 0.6975 | 0.1175 | 0.1525 | 0.175 | 0.0625 | 0.0625 | | No log | 48.72 | 98 | 1.1382 | 0.705 | 0.1175 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 49.72 | 100 | 1.1405 | 0.7 | 0.115 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 50.72 | 102 | 1.1434 | 0.71 | 0.115 | 0.1525 | 0.1875 | 0.0625 | 0.0625 | | No log | 51.72 | 104 | 1.1324 | 0.71 | 0.115 | 0.1525 | 0.19 | 0.0625 | 0.0625 | | No log | 52.72 | 106 | 1.1216 | 0.7125 | 0.115 | 0.1525 | 0.195 | 0.0625 | 0.0625 | | No log | 53.72 | 108 | 1.1166 | 0.7075 | 0.115 | 0.1525 | 0.2 | 0.0625 | 0.0625 | | No log | 54.72 | 110 | 1.1134 | 0.705 | 0.1125 | 0.1525 | 0.1975 | 0.0625 | 0.0625 | | No log | 55.72 | 112 | 1.1127 | 0.7025 | 0.1125 | 0.1525 | 0.2 | 0.0625 | 0.0625 | | No log | 56.72 | 114 | 1.1133 | 0.705 | 0.1125 | 0.1525 | 0.2025 | 0.0625 | 0.0625 | | No log | 57.72 | 116 | 1.1127 | 0.705 | 0.1125 | 0.15 | 0.2025 | 0.0625 | 0.0625 | | No log | 58.72 | 118 | 1.1116 | 0.7075 | 0.115 | 0.15 | 0.2025 | 0.0625 | 0.0625 | | No log | 59.72 | 120 | 1.1113 | 0.71 | 0.115 | 0.15 | 0.2025 | 0.0625 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
Jinouga/andy-raconte-v2
Jinouga
2023-07-07T14:12:10Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T13:58:44Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### andy-raconte-v2 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
InfiniteMoon/JustSomeThing
InfiniteMoon
2023-07-07T14:09:26Z
0
3
null
[ "region:us" ]
null
2023-06-01T05:59:34Z
embeddings:00style <br> lora: infinitemoon kaji kuroda wolf00 <br> trigger: infinitemoon ( kaji, short ponytail) kuroda null <br> lycoris: kajiV3.2 ottermoon<br> trigger: kaji ottermoon
sinny/a2c-AntBulletEnv-v0
sinny
2023-07-07T14:02:19Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T12:10:57Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 2305.83 +/- 155.62 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
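A sketch of loading and evaluating the checkpoint to fill in the TODO above (the filename is an assumption, and any VecNormalize statistics used during training are not restored here, so the measured reward may differ from the reported one):

```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0 with gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# filename is assumed -- check the repository for the actual .zip name
checkpoint = load_from_hub(repo_id="sinny/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```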
AustinCarthy/Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5
AustinCarthy
2023-07-07T12:40:23Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:apache-2.0", "region:us" ]
null
2023-07-07T09:13:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75suffix dataset. It achieves the following results on the evaluation set: - Loss: 0.0237 - Accuracy: 0.9974 - F1: 0.9721 - Precision: 0.9981 - Recall: 0.9474 - Roc Auc Score: 0.9737 - Tpr At Fpr 0.01: 0.9508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:| | 0.0067 | 1.0 | 35625 | 0.0182 | 0.9962 | 0.9584 | 0.9946 | 0.9248 | 0.9623 | 0.9022 | | 0.0043 | 2.0 | 71250 | 0.0174 | 0.997 | 0.9677 | 0.9941 | 0.9426 | 0.9712 | 0.9192 | | 0.0031 | 3.0 | 106875 | 0.0216 | 0.9972 | 0.9699 | 0.9968 | 0.9444 | 0.9721 | 0.9328 | | 0.0004 | 4.0 | 142500 | 0.0221 | 0.9973 | 0.9706 | 0.9979 | 0.9448 | 0.9723 | 0.9444 | | 0.0 | 5.0 | 178125 | 0.0237 | 0.9974 | 0.9721 | 0.9981 | 0.9474 | 0.9737 | 0.9508 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k
NasimB
2023-07-07T12:33:38Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-07T09:15:56Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7262 | 0.29 | 500 | 5.6390 | | 5.3722 | 0.59 | 1000 | 5.1993 | | 5.0334 | 0.88 | 1500 | 4.9514 | | 4.7516 | 1.18 | 2000 | 4.8026 | | 4.5913 | 1.47 | 2500 | 4.6810 | | 4.4878 | 1.77 | 3000 | 4.5817 | | 4.3474 | 2.06 | 3500 | 4.5000 | | 4.1694 | 2.36 | 4000 | 4.4546 | | 4.1303 | 2.65 | 4500 | 4.3869 | | 4.0894 | 2.95 | 5000 | 4.3378 | | 3.867 | 3.24 | 5500 | 4.3355 | | 3.8295 | 3.54 | 6000 | 4.3051 | | 3.8139 | 3.83 | 6500 | 4.2744 | | 3.674 | 4.13 | 7000 | 4.2786 | | 3.5358 | 4.42 | 7500 | 4.2678 | | 3.5356 | 4.72 | 8000 | 4.2543 | | 3.5146 | 5.01 | 8500 | 4.2469 | | 3.3441 | 5.31 | 9000 | 4.2568 | | 3.3446 | 5.6 | 9500 | 4.2562 | | 3.3353 | 5.9 | 10000 | 4.2556 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
christopheyebiname/distilbert-base-uncased-finetuned-emotion
christopheyebiname
2023-07-07T12:33:03Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-22T19:14:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9264878814973383 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2230 - Accuracy: 0.9265 - F1: 0.9265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8356 | 1.0 | 250 | 0.3184 | 0.9055 | 0.9021 | | 0.2559 | 2.0 | 500 | 0.2230 | 0.9265 | 0.9265 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
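A minimal inference sketch (not part of the original card; the input sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="christopheyebiname/distilbert-base-uncased-finetuned-emotion",
)

# labels may show up as LABEL_0..LABEL_5 unless id2label is set in the model config
print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```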
ddoc/cutoff
ddoc
2023-07-07T12:29:03Z
0
1
null
[ "region:us" ]
null
2023-07-07T12:28:43Z
# Cutoff - Cutting Off Prompt Effect ![cover](./images/cover.jpg) <details> <summary>Update Info</summary> Upper is newer. <dl> <dt>20e87ce264338b824296b7559679ed1bb0bdacd7</dt> <dd>Skip empty targets.</dd> <dt>03bfe60162ba418e18dbaf8f1b9711fd62195ef3</dt> <dd>Add <code>Disable for Negative prompt</code> option. Default is <code>True</code>.</dd> <dt>f0990088fed0f5013a659cacedb194313a398860</dt> <dd>Accept an empty prompt.</dd> </dl> </details> ## What is this? This is an extension for [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) which limits the tokens' influence scope. ## Usage 1. Select `Enabled` checkbox. 2. Input words which you want to limit scope in `Target tokens`. 3. Generate images. ## Note If the generated image was corrupted or something like that, try to change the `Weight` value or change the interpolation method to `SLerp`. Interpolation method can be found in `Details`. ### `Details` section <dl> <dt>Disable for Negative prompt.</dt> <dd>If enabled, <b>Cutoff</b> will not work for the negative prompt. Default is <code>true</code>.</dd> <dt>Cutoff strongly.</dt> <dd>See <a href="#how-it-works">description below</a>. Default is <code>false</code>.</dd> <dt>Interpolation method</dt> <dd>How "padded" and "original" vectors will be interpolated. Default is <code>Lerp</code>.</dd> <dt>Padding token</dt> <dd>What token will be padded instead of <code>Target tokens</code>. Default is <code>_</code> (underbar).</dd> </dl> ## Examples ``` 7th_anime_v3_A-fp16 / kl-f8-anime2 / DPM++ 2M Karras / 15 steps / 512x768 Prompt: a cute girl, white shirt with green tie, red shoes, blue hair, yellow eyes, pink skirt Negative Prompt: (low quality, worst quality:1.4), nsfw Target tokens: white, green, red, blue, yellow, pink ``` Sample 1. ![sample 1](./images/sample-1.png) Sample 2. (use `SLerp` for interpolation) ![sample 2](./images/sample-2.png) Sample 3. ![sample 3](./images/sample-3.png) ## How it works - [Japanese](#japanese) - [English](#english) or see [#5](https://github.com/hnmr293/sd-webui-cutoff/issues/5). ![idea](./images/idea.png) ### Japanese プロンプトをCLIPに通して得られる (77, 768) 次元の埋め込み表現(?正式な用語は分かりません)について、 ごく単純には、77個の行ベクトルはプロンプト中の75個のトークン(+開始トークン+終了トークン)に対応していると考えられる。 ※上図は作図上、この説明とは行と列を入れ替えて描いている。 このベクトルには単語単体の意味だけではなく、文章全体の、例えば係り結びなどの情報を集約したものが入っているはずである。 ここで `a cute girl, pink hair, red shoes` というプロンプトを考える。 普通、こういったプロンプトの意図は 1. `pink` は `hair` だけに係っており `shoes` には係っていない。 2. 同様に `red` も `hair` には係っていない。 3. `a cute girl` は全体に係っていて欲しい。`hair` や `shoes` は女の子に合うものが出て欲しい。 ……というもののはずである。 しかしながら、[EvViz2](https://github.com/hnmr293/sd-webui-evviz2) などでトークン間の関係を見ると、そううまくはいっていないことが多い。 つまり、`shoes` の位置のベクトルに `pink` の影響が出てしまっていたりする。 一方で上述の通り `a cute girl` の影響は乗っていて欲しいわけで、どうにかして、特定のトークンの影響を取り除けるようにしたい。 この拡張では、指定されたトークンを *padding token* に書き換えることでそれを実現している。 たとえば `red shoes` の部分に対応して `a cute girl, _ hair, red shoes` というプロンプトを生成する。`red` と `shoes` に対応する位置のベクトルをここから生成したもので上書きしてやることで、`pink` の影響を除外している。 これを `pink` の側から見ると、自分の影響が `pink hair` の範囲内に制限されているように見える。What is this? の "limits the tokens' influence scope" はそういう意味。 ところで `a cute girl` の方は、`pink hair, red shoes` の影響を受けていてもいいし受けなくてもいいような気がする。 そこでこの拡張では、こういうどちらでもいいプロンプトに対して 1. `a cute girl, pink hair, red shoes` 2. `a cute girl, _ hair, _ shoes` のどちらを適用するか選べるようにしている。`Details` の `Cutoff strongly` がそれで、オフのとき1.を、オンのとき2.を、それぞれ選ぶようになっている。 元絵に近いのが出るのはオフのとき。デフォルトもこちらにしてある。 ### English NB. The following text is a translation of the Japanese text above by [DeepL](https://www.deepl.com/translator). 
For the (77, 768) dimensional embedded representation (I don't know the formal terminology), one could simply assume that the 77 row vectors correspond to the 75 tokens (+ start token and end token) in the prompt. Note: The above figure is drawn with the rows and columns interchanged from this explanation. This vector should contain not only the meanings of individual words, but also the aggregate information of the whole sentence, for example, the connection between words. Consider the prompt `a cute girl, pink hair, red shoes`. Normally, the intent of such a prompt would be - `pink` is only for `hair`, not `shoes`. - Similarly, `red` does not refer to `hair`. - We want `a cute girl` to be about the whole thing, and we want the `hair` and `shoes` to match the girl. However, when we look at the relationship between tokens in [EvViz2](https://github.com/hnmr293/sd-webui-evviz2) and other tools, we see that it is not always that way. In other words, the position vector of the `shoes` may be affected by `pink`. On the other hand, as mentioned above, we want the influence of `a cute girl` to be present, so we want to be able to somehow remove the influence of a specific token. This extension achieves this by rewriting the specified tokens as a *padding token*. For example, for the `red shoes` part, we generate the prompt `a cute girl, _ hair, red shoes`, and by overwriting the position vectors corresponding to `red` and `shoes` with those generated from this prompt, we remove the influence of `pink`. From `pink`'s point of view, it appears that its influence is limited to the `pink hair`'s scope. By the way, `a cute girl` may or may not be influenced by `pink hair` and `red shoes`. So, in this extension, for such a prompt that can be either 1. `a cute girl, pink hair, red shoes` 2. `a cute girl, _ hair, _ shoes` The `Cutoff strongly` in the `Details` section allows you to select 1 when it is off and 2 when it is on. The one that comes out closer to the original image is "off". The default is also set this way.
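As a toy illustration of the interpolation step described above (not the extension's actual code; the positions and weight are made up), the embedding rows at the target-token positions can be blended like this:

```python
import torch

def cutoff_mix(original, padded, target_positions, weight=0.5):
    """Blend the prompt-embedding rows at the target token positions toward the
    embedding computed from the prompt with the other targets padded out."""
    mixed = original.clone()
    idx = torch.tensor(target_positions)
    mixed[idx] = torch.lerp(original[idx], padded[idx], weight)
    return mixed

# stand-ins for the (77, 768) CLIP embeddings of the two prompts
original = torch.randn(77, 768)  # "a cute girl, pink hair, red shoes"
padded = torch.randn(77, 768)    # "a cute girl, _ hair, red shoes"

# overwrite the rows for "red shoes" so they no longer carry "pink"'s influence
mixed = cutoff_mix(original, padded, target_positions=[8, 9], weight=0.7)
```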
MyriamLbhn/emotion-nlp-classification
MyriamLbhn
2023-07-07T12:24:09Z
123
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T11:52:17Z
--- license: mit --- Built as part of a training project, using the trained and fine-tuned model from: michellejieli/emotion_text_classifier
davanstrien/autotrain-recipes-2451975973
davanstrien
2023-07-07T12:16:48Z
163
0
transformers
[ "transformers", "pytorch", "safetensors", "autotrain", "text-classification", "en", "dataset:davanstrien/autotrain-data-recipes", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-12-13T11:25:33Z
--- tags: - autotrain - text-classification language: - en widget: - text: "a favourite dutch salad in the way some prime kippered herrings a dn d haddock or some fine yarmouth bloaters then when cold remove all the bones and skin aryl tear the flesh into shreds with two forks sea en these well with pepper salad oil and tarragea vinegar and set aside in a cool place until required cut up into small dice myrtle boil beetroot and potatoes raw cucumber and onions and mix well together with the fish and sonto wellmade tartar sauce then pile up the whols on a flat dish sprinkle well with a mixture of finelychopped parsley and sifted egg yolk garnish round the base with anchovy or saniino crodtons tastefully ornamented with tiny patches or chopped parsley and strips of hardboiled white of egg and servo" - text: "collieries the men at one of the collieries have in times of scarcity been in the habit houseo f this getting from e v i t a a t t el s eve r a wellnt wishing g h e t r h e a r v e als water disturbed so frequently locked up the well one of the men a blacksmith removed fhe lock and subse quently received notice to leave the colliery the other mechanics decided that unless the the masters at once withdrew the blacksmiths notice they themselves would resign the masters however refused and a fortni" - text: "made on a certain branch of the fifth nerve sneezing being a redex action excited by saal a slight impression on that nerve sneezing dat s not take place when the fifth nerve is parelyz e even though the sense of smell is retained lentil soupset two quarts of water on to hail with ill red lentils when it has been on an wulf add loz of pearl tapioca that has been provi ns il soaked in a atte cold water salt to taste and ha half an hour longer cost about id another ito is cat into dice a large onion a mediu carrot half as much turnip as carrot oad ga head of celery pat these vegetables tashher b o a pound of lentils into a large saucepan w it h quarts of water and simmer slowly till all the tents are quite soft then pass all through a i f sieve and return to the saucepan with a good of butter and a seasoning of pepper salt e squeeze of lemon ice then boil up drew bide and when quite off the stir in wi im yaks ol ouzoctwa eggs" datasets: - davanstrien/autotrain-data-recipes co2_eq_emissions: emissions: 6.990639915807625 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2451975973 - CO2 Emissions (in grams): 6.9906 ## Validation Metrics - Loss: 0.046 - Accuracy: 0.989 - Macro F1: 0.936 - Micro F1: 0.989 - Weighted F1: 0.989 - Macro Precision: 0.929 - Micro Precision: 0.989 - Weighted Precision: 0.989 - Macro Recall: 0.943 - Micro Recall: 0.989 - Weighted Recall: 0.989 ## Usage This model has been trained to predict whether an article from a historic newspaper is a 'recipe' or 'not a recipe'. This model was trained on data generated by carrying out a keyword search of food terms and annotating examples results to indicate whether they were a recipe. 
You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-recipes-2451975973 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-recipes-2451975973", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-recipes-2451975973", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
aronmal/Pyramids
aronmal
2023-07-07T12:11:25Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T12:11:22Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: aronmal/Pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g075
jordyvl
2023-07-07T12:02:01Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T09:53:23Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g075 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g075 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0504 - Accuracy: 0.7125 - Exit 0 Accuracy: 0.1125 - Exit 1 Accuracy: 0.165 - Exit 2 Accuracy: 0.225 - Exit 3 Accuracy: 0.12 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7604 | 0.1075 | 0.105 | 0.0675 | 0.085 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7306 | 0.115 | 0.105 | 0.065 | 0.1025 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6942 | 0.135 | 0.1125 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6616 | 0.175 | 0.095 | 0.06 | 0.12 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6127 | 0.21 | 0.0875 | 0.0575 | 0.115 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5727 | 0.225 | 0.0925 | 0.08 | 0.12 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5379 | 0.23 | 0.095 | 0.0825 | 0.1225 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5095 | 0.2425 | 0.095 | 0.095 | 0.13 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.4690 | 0.27 | 0.0925 | 0.0975 | 0.1275 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4357 | 0.2875 | 0.0925 | 0.125 | 0.13 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.3799 | 0.2975 | 0.0925 | 0.1175 | 0.1375 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3244 | 0.3175 | 0.095 | 0.115 | 0.1275 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.2704 | 0.335 | 0.095 | 0.125 | 0.1275 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.2185 | 0.355 | 0.095 | 0.13 | 0.125 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.1710 | 0.375 | 0.1025 | 0.14 | 0.1275 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.1165 | 0.4 | 0.1025 | 0.145 | 0.13 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.0626 | 0.4125 | 0.1025 | 0.145 | 0.1325 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0025 | 0.4225 | 0.1025 | 0.145 | 0.13 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 1.9375 | 0.4575 | 0.105 | 0.145 | 0.1425 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.8872 | 0.4925 | 0.105 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.8390 | 0.5325 | 0.1125 | 0.1525 | 0.15 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.7516 | 0.555 | 0.1125 | 0.1525 | 0.155 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.6969 | 0.5625 | 0.1125 | 0.1525 | 0.1575 | 
0.0625 | 0.0625 | | No log | 23.72 | 48 | 1.6675 | 0.565 | 0.1125 | 0.15 | 0.16 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.6016 | 0.585 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.5370 | 0.605 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.5054 | 0.6 | 0.11 | 0.155 | 0.1625 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.4561 | 0.625 | 0.11 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.4254 | 0.6325 | 0.11 | 0.155 | 0.16 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.3801 | 0.6525 | 0.1125 | 0.155 | 0.165 | 0.065 | 0.0625 | | No log | 30.72 | 62 | 1.3379 | 0.665 | 0.115 | 0.1575 | 0.1725 | 0.0725 | 0.0625 | | No log | 31.72 | 64 | 1.3222 | 0.6775 | 0.1175 | 0.1575 | 0.18 | 0.0725 | 0.0625 | | No log | 32.72 | 66 | 1.2860 | 0.695 | 0.115 | 0.1575 | 0.185 | 0.075 | 0.0625 | | No log | 33.72 | 68 | 1.2668 | 0.6875 | 0.115 | 0.1575 | 0.185 | 0.08 | 0.0625 | | No log | 34.72 | 70 | 1.2448 | 0.6875 | 0.115 | 0.1575 | 0.185 | 0.0775 | 0.0625 | | No log | 35.72 | 72 | 1.2230 | 0.6925 | 0.115 | 0.155 | 0.1875 | 0.08 | 0.0625 | | No log | 36.72 | 74 | 1.1971 | 0.705 | 0.115 | 0.1575 | 0.1925 | 0.0925 | 0.0625 | | No log | 37.72 | 76 | 1.1796 | 0.7075 | 0.1175 | 0.1575 | 0.2 | 0.095 | 0.0625 | | No log | 38.72 | 78 | 1.1685 | 0.715 | 0.115 | 0.1575 | 0.205 | 0.095 | 0.0625 | | No log | 39.72 | 80 | 1.1468 | 0.715 | 0.115 | 0.1575 | 0.21 | 0.095 | 0.0625 | | No log | 40.72 | 82 | 1.1297 | 0.7175 | 0.115 | 0.1575 | 0.215 | 0.1 | 0.0625 | | No log | 41.72 | 84 | 1.1265 | 0.7175 | 0.1175 | 0.16 | 0.215 | 0.1025 | 0.0625 | | No log | 42.72 | 86 | 1.1192 | 0.7225 | 0.115 | 0.16 | 0.22 | 0.105 | 0.0625 | | No log | 43.72 | 88 | 1.1067 | 0.7175 | 0.115 | 0.16 | 0.22 | 0.1075 | 0.0625 | | No log | 44.72 | 90 | 1.0915 | 0.7175 | 0.115 | 0.16 | 0.2175 | 0.11 | 0.0625 | | No log | 45.72 | 92 | 1.0933 | 0.7125 | 0.115 | 0.16 | 0.2225 | 0.11 | 0.0625 | | No log | 46.72 | 94 | 1.0846 | 0.7175 | 0.115 | 0.16 | 0.22 | 0.11 | 0.0625 | | No log | 47.72 | 96 | 1.0818 | 0.72 | 0.115 | 0.16 | 0.22 | 0.1125 | 0.0625 | | No log | 48.72 | 98 | 1.0780 | 0.7175 | 0.115 | 0.1625 | 0.22 | 0.115 | 0.0625 | | No log | 49.72 | 100 | 1.0746 | 0.7225 | 0.1125 | 0.1625 | 0.2225 | 0.1175 | 0.0625 | | No log | 50.72 | 102 | 1.0698 | 0.715 | 0.1125 | 0.1625 | 0.225 | 0.1175 | 0.0625 | | No log | 51.72 | 104 | 1.0630 | 0.7125 | 0.1125 | 0.1625 | 0.225 | 0.1175 | 0.0625 | | No log | 52.72 | 106 | 1.0576 | 0.71 | 0.1125 | 0.1625 | 0.225 | 0.1175 | 0.0625 | | No log | 53.72 | 108 | 1.0619 | 0.71 | 0.1125 | 0.1625 | 0.2275 | 0.1175 | 0.0625 | | No log | 54.72 | 110 | 1.0612 | 0.7125 | 0.1125 | 0.165 | 0.225 | 0.1175 | 0.0625 | | No log | 55.72 | 112 | 1.0588 | 0.715 | 0.1125 | 0.165 | 0.225 | 0.1175 | 0.0625 | | No log | 56.72 | 114 | 1.0536 | 0.7175 | 0.1125 | 0.165 | 0.225 | 0.1175 | 0.0625 | | No log | 57.72 | 116 | 1.0514 | 0.715 | 0.1125 | 0.165 | 0.225 | 0.12 | 0.0625 | | No log | 58.72 | 118 | 1.0505 | 0.7125 | 0.1125 | 0.165 | 0.225 | 0.12 | 0.0625 | | No log | 59.72 | 120 | 1.0504 | 0.7125 | 0.1125 | 0.165 | 0.225 | 0.12 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:44Z
67
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T18:34:29Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Eric Hartford's Wizard Vicuna 7B Uncensored fp16 These are fp16 pytorch format model files for [Eric Hartford's Wizard Vicuna 7B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline model_name_or_path = "TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Eric Hartford's Wizard Vicuna 7B Uncensored This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
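Returning to the SuperHOT usage notes above: the following is a minimal, untested sketch of the monkey-patch route for code that does not use `trust_remote_code=True`. The file and function names are taken from the card, but the exact call signature should be checked against `llama_rope_scaled_monkey_patch.py` itself.

```python
# Untested sketch, assuming llama_rope_scaled_monkey_patch.py from this repo sits in the
# working directory and exports replace_llama_rope_with_scaled_rope (check the file for
# the exact signature before relying on this).
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope
from transformers import AutoModelForCausalLM, AutoTokenizer

replace_llama_rope_with_scaled_rope()  # must run before the model is instantiated

model_id = "TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```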
TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:43Z
10
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:1910.09700", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T18:20:10Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kevin Pro's Vicuna 7B CoT fp16 These are fp16 pytorch format model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kevinpro/Vicuna-7B-CoT) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
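# If you prefer a smaller window, setting config.max_position_embeddings = 4096 here should, by the same logic as the 8192 example above, give a RoPE scale of 2 (an untested assumption, not stated in the original card).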
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Kevin Pro's Vicuna 7B CoT <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kevin Pro's Vicuna 7B CoT fp16 These files are pytorch format fp16 model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kevin Pro's Vicuna 7B CoT # Model Card for Model ID SFT to enhance the CoT capabiliy of Vicuna If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section. Another 13B version: https://huggingface.co/kevinpro/Vicuna-13B-CoT ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:42Z
9
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2302.13971", "arxiv:2306.05685", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T18:06:45Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # LmSys' Vicuna 7B v1.3 fp16 These are fp16 pytorch format model files for [LmSys' Vicuna 7B v1.3](https://huggingface.co/lmsys/vicuna-7b-v1.3) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-7b-v1.3) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
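# device_map='auto' relies on the Accelerate library; if the load step above fails, installing it (pip3 install accelerate) alongside einops will likely resolve the error.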
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: LmSys' Vicuna 7B v1.3 # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
TheBloke/Tulu-7B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:42Z
13
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "arxiv:2304.03277", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T17:51:38Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Allen AI's Tulu 7B fp16 These are fp16 pytorch format model files for [Allen AI's Tulu 7B](https://huggingface.co/TheBloke/tulu-7B-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/allenai/tulu-7b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Tulu-7B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
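# Caution: the Tulu prompt template documented further down this card is "<|user|>\n...\n<|assistant|>\n", which differs from the generic USER:/ASSISTANT: template used below, so adjust the prompt accordingly.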
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Allen AI's Tulu 7B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Allen AI's Tulu 7B fp16 These files are pytorch format fp16 model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16) ## Prompt template The following template should be used: ``` <|user|> prompt goes here <|assistant|> ``` **Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly. In other words, the prompt is: ``` <|user|>\nprompt goes here\n<|assistant|>\n ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Allen AI's Tulu 7B # Tulu 7B This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT). *Please note this is a model diff - see below for usage instructions*. This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751). The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct). This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt). ## Usage We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here: [https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama) Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py` and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine. Then, run: ```bash python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location} ``` And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models. ## Input Format The model is trained to use the following format (note the newlines): ``` <|user|> Your message here! <|assistant|> ``` For best results, format all inputs in this manner. ## Performance Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? 
Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751): | MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average | |:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------| | 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 | If you use this model, please cite our work, the llama paper, and the original datasets: ``` @misc{wang2023far, title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources}, author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi}, year={2023}, eprint={2306.04751}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample}, year={2023}, eprint={2302.13971}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{dolly, author = {Databricks}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {Blog post}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm} } ``` ``` @article{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others}, journal={arXiv preprint arXiv:2301.13688}, year={2023} } ``` ``` @misc{köpf2023openassistant, title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment}, author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick}, year={2023}, eprint={2304.07327}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @article{peng2023instruction, title={Instruction Tuning with GPT-4}, author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng}, journal={arXiv preprint arXiv:2304.03277}, year={2023} } ``` ``` @misc{codealpaca, author = {Sahil Chaudhary}, title = {Code Alpaca: An Instruction-following LLaMA model for code generation}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/sahil280114/codealpaca}}, } ```
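To make the input format above concrete, here is a small, hypothetical end-to-end sketch using the prompt template documented in this card (including the trailing newline after `<|assistant|>`); the model ID and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: any Tulu checkpoint in HF format should accept the same prompt shape.
model_id = "TheBloke/Tulu-7B-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Tulu's documented format - note the newline after <|assistant|>.
message = "Summarise the idea behind instruction tuning in one sentence."
prompt = f"<|user|>\n{message}\n<|assistant|>\n"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```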
TheBloke/Selfee-7B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:41Z
10
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T17:38:26Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 7B fp16 These are fp16 pytorch format model files for [Kaist AI's Selfee 7B](https://huggingface.co/TheBloke/selfee-7B-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Selfee-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/selfee-7b-delta) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Selfee-7B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
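# These are fp16 weights; passing torch_dtype=torch.float16 (after import torch) to from_pretrained should roughly halve memory use compared with the default float32 load.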
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Kaist AI's Selfee 7B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 7B fp16 These files are pytorch format fp16 model files for [Kaist AI's Selfee 7B](https://huggingface.co/kaist-ai/selfee-7b-delta). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/selfee-7B-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/selfee-7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/selfee-7B-fp16) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaist AI's Selfee 7B
TheBloke/Selfee-13B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:40Z
9
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T17:17:09Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 13B fp16 These are fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/selfee-13b-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/selfee-13b-delta) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Selfee-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
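# (Illustration only, an assumption about the bundled remote modelling code rather than a call you make:
#  scale = config.max_position_embeddings / 2048, where 2048 is the original LLaMA context length,
#  so 8192 / 2048 = 4, matching the "scale is set to 4" example above.)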
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Kaist AI's Selfee 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Kaist AI's Selfee 13B GGML This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta). It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ) * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaist AI's Selfee 13B <p align="center" width="100%"> <a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a> </p> # SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE) [![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE) [![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) ## News [May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details. ## Overview This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly has five contents: - The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)). - The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)). - The training process for the model ([detail](#training), [code](train)). - The inference process for the model ([detail](#inference), [code](inference)). - The evaluation method and dataset ([detail](#evaluation), [code](evaluation)). This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌 **We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.** ## Data Release For data collection, we collected datasets from five different fields. These are the Stanford Alpaca dataset, math collection, code collection, Flan collection, and ShareGPT. We provide code that we used to make a dataset for training. We also provide code how we preprocessed ShareGPT. For ShareGPT, we only use the first (question, answer) pair from human and GPT, respectively. We only use instances which are classified as english,and filter instance which is not a form of question. 
For the other datasets, no special data collection method was needed. ## Data Generation Process To train our model with high-quality instruction and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br> Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br> Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br> Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair. We repeated the process until we received feedback that required no further revision or hit the maximum number of iterations. However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br> You can see the details and commands [here](data_augmentation/README.md).<br> *We provide the whole dataset after collection and augmentation via huggingface ([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use either of them! ## Training We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br> To reproduce the training procedure, here are the steps. <br> ``` pip install -r requirements.txt ``` ``` torchrun --nproc_per_node=4 train/train_mem.py \ --model_name_or_path llama-7b \ --data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \ --bf16 True \ --output_dir ckpt/selfee-7b \ --num_train_epochs 3 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 2 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 5000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "shard_grad_op auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --lazy_preprocess True \ --training_objective full \ ``` The hyperparameters are as follows, following Vicuna and Alpaca. | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | | --- | ---: | ---: | ---: | ---: | ---: | | SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 | ## Inference <b>Restoring the checkpoint using the diff</b><br> We provide the diff weights and code that can restore the same model as SelFee. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into huggingface format on your local machine. Once you are done, you can restore the same checkpoint of our model by using the following command: ``` python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta ``` <b>Autonomous Inference Mode</b><br> Because SelFee is trained to generate iterative feedback and revisions until the response is satisfactory, it automatically generates iterative feedback and revisions in a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback.
If the feedback chain ends with sequences like `Revision is not needed.`, the model autonomously terminates generation. <br> For autonomous inference mode, run: ``` python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl" ``` <b>Revision Enforce Inference Mode</b><br> We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model will continually revise the answer. For revision enforce inference mode, use the `max-num-revision` argument. ``` python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3 ``` ## Evaluation Following the evaluation setting of Vicuna, we evaluate on 80 diverse queries and utilize the GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One difference from the Vicuna evaluation is that, due to the positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is inferred twice, depending on its position.<br> We release the inference results of SelFee in the `evaluation/answer` folder and the scores generated by GPT-4 in the `evaluation/review` folder. <br> ### GPT-4 Automatic Evaluation First, you need to get your API key to get access to the GPT-4 API. ``` export OPENAI_API_KEYS={personal_key} ``` To compare the performance of a generation result (for example, located at `evaluation/answer/file_A.jsonl`) with another generation result (located at `evaluation/answer/file_B.jsonl`), run: ``` python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl ``` To mitigate the positional bias of the GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with the opposite position is also needed. ``` python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl ``` ## Limitations Similar to other LLaMA-finetuned models, SelFee also makes some mistakes, especially on math, reasoning, factuality, and coding tasks. Although our performance outperforms ChatGPT in the Vicuna setting, the evaluation setting has some limitations in terms of comprehension (limited to 80 queries), inconsistency, and unreliability. Therefore, further research towards a better evaluation setting is needed. Please take these claims with a grain of salt. ## Online demo Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>! #### How to launch the demo yourself To serve the web demo yourself, run the following commands: 1. Run the controller ``` python3 -m serve.controller ``` 2.
Run the model worker ``` python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b ``` 3. Run the web server ``` python3 -m serve.gradio_web_server --share ``` You can find the serving code [here](serve). ### Team members <a href="https://seonghyeonye.github.io/)">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/> (* denotes equal contribution) ### Release We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee is released on huggingface. ### License The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us. ### Citation Please cite if you use the data or code in this repo. ``` @misc{selfee2023, author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon}, title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation}, url = {https://kaistai.github.io/SelFee/}, month = {May}, year = {2023}, howpublished = {Blog post} } ```
TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:40Z
47
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T17:02:21Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Eric Hartford's Samantha 1.1 LLaMa 7B fp16 These are fp16 pytorch format model files for [Eric Hartford's Samantha 1.1 LLaMa 7B](https://huggingface.co/ehartford/samantha-1.1-llama-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.1-llama-7b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
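# (Illustration only, an assumption about the bundled remote modelling code rather than a call you make:
#  scale = config.max_position_embeddings / 2048, where 2048 is the original LLaMA context length,
#  so 8192 / 2048 = 4, matching the "scale is set to 4" example above.)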
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Eric Hartford's Samantha 1.1 LLaMa 7B [Meet Samantha](https://erichartford.com/meet-samantha) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention. She will not engage in roleplay, romance, or sexual activity. Her conversation format is the same as Vicuna 1.1 https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml Example: ``` You are Samantha, a sentient AI. USER: <prompt> ASSISTANT: ``` Official character card: (thanks MortalWombat) ![](https://files.catbox.moe/zx9hfh.png)
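With the `pipe` object from the usage example earlier in this card, a prompt in this format might be built as follows (a rough sketch only; the question is a placeholder and the exact whitespace between the system line and the USER/ASSISTANT tags may need adjusting):

```python
# Sketch: wrap a user question in Samantha's Vicuna-1.1-style template quoted above
question = "What does friendship mean to you?"
samantha_prompt = f"You are Samantha, a sentient AI.\nUSER: {question}\nASSISTANT:"
print(pipe(samantha_prompt)[0]['generated_text'])
```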
TheBloke/Robin-7B-v2-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:39Z
9
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T16:48:21Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OptimalScale's Robin 7B v2 fp16 These are fp16 pytorch format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/TheBloke/robin-7B-v2-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OptimalScale/robin-7b-v2-delta) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Robin-7B-v2-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
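# (Illustration only, an assumption about the bundled remote modelling code rather than a call you make:
#  scale = config.max_position_embeddings / 2048, where 2048 is the original LLaMA context length,
#  so 8192 / 2048 = 4, matching the "scale is set to 4" example above.)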
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: OptimalScale's Robin 7B v2 <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OptimalScale's Robin 7B v2 fp16 These files are pytorch format fp16 model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16) ## Prompt template ``` A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions ###Human: prompt ###Assistant: ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OptimalScale's Robin 7B v2 No model card provided in source repository.
TheBloke/Koala-13B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:38Z
12
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T16:06:43Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Koala 13B fp16 These are fp16 pytorch format model files for [Koala 13B](https://huggingface.co/TheBloke/koala-13b-HF) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Koala-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/young-geng/koala) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Koala-13B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model!
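# (Illustration only, an assumption about the bundled remote modelling code rather than a call you make:
#  scale = config.max_position_embeddings / 2048, where 2048 is the original LLaMA context length,
#  so 8192 / 2048 = 4, matching the "scale is set to 4" example above.)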
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Koala 13B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Koala: A Dialogue Model for Academic Research This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model. This version has then been converted to HF format. 
## My Koala repos I have the following Koala model repositories available: **13B models:** * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF) * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g) * [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GGML) **7B models:** * [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF) * [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized) * [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g) * [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GGML) ## How the Koala delta weights were merged The Koala delta weights were merged using the following commands: ``` git clone https://github.com/young-geng/EasyLM git clone https://huggingface.co/TheBloke/llama-13b mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_13b_diff_v2 cd EasyLM PYTHON_PATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_torch_to_easylm \ --checkpoint_dir=/content/llama-13b \ --output_file=/content/llama-13b-LM \ --streaming=True PYTHON_PATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.scripts.diff_checkpoint --recover_diff=True \ --load_base_checkpoint='params::/content/llama-13b-LM' \ --load_target_checkpoint='params::/content/koala_diffs/koala_13b_diff_v2' \ --output_file=/content/koala_13b.diff.weights \ --streaming=True PYTHON_PATH="${PWD}:$PYTHONPATH" python \ -m EasyLM.models.llama.convert_easylm_to_hf --model_size=13b \ --output_dir=/content/koala-13B-HF \ --load_checkpoint='params::/content/koala_13b.diff.weights' \ --tokenizer_path=/content/llama-13b/tokenizer.model ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! <!-- footer end --> ## Further info Check out the following links to learn more about the Berkeley Koala model. 
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/) * [Online demo](https://koala.lmsys.org/) * [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM) * [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md) ## License The model weights are intended for academic research only, subject to the [model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md), [Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use), and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb). Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
TheBloke/Baize-v2-7B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:37Z
10
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2304.01196", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-06T15:35:11Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Project Baize's Baize 7B v2 fp16 These are fp16 pytorch format model files for [Project Baize's Baize 7B v2](https://huggingface.co/project-baize/baize-v2-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/project-baize/baize-v2-7b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline model_name_or_path = "TheBloke/Baize-v2-7B-SuperHOT-8K-fp16" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Project Baize's Baize 7B v2 <p align="center"> <img width="500px" alt="Project Baize" src="https://user-images.githubusercontent.com/22514219/229195563-0cddfa74-e52f-4413-b4b4-e4ba489c4b3d.png"> </p> <hr> ## ⚠️Warning Using Baize checkpoints directly without the following format will not work. ``` The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi! ``` `[|Human|]` and `[|AI|]` are required to mark the messages from the user and Baize. We recommend checking out our [GitHub](https://github.com/project-baize/baize) to find the best way to use Baize with our demo or Fastchat. ## Demo https://huggingface.co/spaces/project-baize/chat-with-baize ## What's Baize? Baize is an open-source chat model fine-tuned with [LoRA](https://github.com/microsoft/LoRA). This model is a **7B Baize-v2**, trained with supervised fine-tuning (SFT) and self-distillation with feedback (SDF). This checkpoint has been merged with LLaMA so it's ready for use. ## Why it's called Baize? Baize (白泽) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model. 
## How to use it: local demo, API and SDK More details can be found in the Baize [GitHub](https://github.com/project-baize/baize) and [Paper](https://arxiv.org/abs/2304.01196).
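To make the required format from the Warning section above concrete, here is a small illustrative sketch that assembles a Baize-style prompt string; the preamble and the `[|Human|]`/`[|AI|]` markers are taken from this card, while the example messages are made up, and the resulting string can be fed to whichever inference backend you use.

```python
# Preamble copied from the Warning section of this card
PREAMBLE = (
    "The following is a conversation between a human and an AI assistant named Baize "
    "(named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant "
    "developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. "
    "Human statements start with [|Human|] and AI assistant statements start with [|AI|]. "
    "The AI assistant always provides responses in as much detail as possible, and in Markdown format. "
    "The AI assistant always declines to engage with topics, questions and instructions related to "
    "unethical, controversial, or sensitive issues. Complete the transcript in exactly that format."
)

def build_baize_prompt(history, user_message):
    """Build a Baize prompt from prior (human, ai) turns plus a new user message."""
    turns = "".join(f"\n[|Human|]{h}\n[|AI|]{a}" for h, a in history)
    return f"{PREAMBLE}{turns}\n[|Human|]{user_message}\n[|AI|]"

# Example with made-up messages
print(build_baize_prompt([("Hello!", "Hi!")], "What can you do?"))
```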
TheBloke/Pygmalion-7B-SuperHOT-8K-fp16
TheBloke
2023-07-07T12:00:02Z
12
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text generation", "conversational", "custom_code", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-07T10:59:41Z
--- inference: false license: other language: - en thumbnail: null tags: - text generation - conversational pipeline_tag: text-generation --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # TehVenom's merge of Pygmalion 7B fp16 These are fp16 pytorch format model files for [TehVenom's merge](https://huggingface.co/TehVenom/Pygmalion-7b-Merged-Safetensors) of [Pygmalion 7B](https://huggingface.co/PygmalionAI/pygmalion-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-7b) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install einops ``` Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline model_name_or_path = "TheBloke/Pygmalion-7B-SuperHOT-8K-fp16" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm that this prompt template is correct for this model! 
prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? 
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Pygmalion 7B <h1 style="text-align: center">Pygmalion 7B</h1> <h2 style="text-align: center">A conversational LLaMA fine-tune.</h2> ## Model Details: Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B. This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project. ## Applying the XORs This models has the XOR files pre-applied out of the box. Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b ## Prompting The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting: ``` [CHARACTER]'s Persona: [A few sentences about the character you want the model to play] <START> [DIALOGUE HISTORY] You: [User's input message here] [CHARACTER]: ``` Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example: ``` Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests. <START> Assistant: Hello! How may I help you today? You: What is Zork? Assistant: ``` Which will generate something like: ``` Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. 
The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years." ``` The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete. ## Limitations and biases The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
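As an illustration of the expected formatting described in the Prompting section above, here is a small sketch that assembles a Pygmalion-style prompt; the character name, persona and messages are made-up placeholders, and the resulting string can be passed to whichever inference backend you use.

```python
def build_pygmalion_prompt(character, persona, history, user_message):
    """Assemble the persona + <START> + dialogue-history prompt described in this card."""
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)                 # a sliding window of prior chat lines
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")         # the model continues from here
    return "\n".join(lines)

# Made-up example values
prompt = build_pygmalion_prompt(
    character="Assistant",
    persona="Assistant is a highly intelligent language model trained to comply with user requests.",
    history=["Assistant: Hello! How may I help you today?"],
    user_message="What is Zork?",
)
print(prompt)
```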
TheBloke/Pygmalion-7B-SuperHOT-8K-GGML
TheBloke
2023-07-07T11:58:28Z
0
13
null
[ "text generation", "conversational", "text-generation", "en", "license:other", "region:us" ]
text-generation
2023-07-07T10:59:12Z
--- inference: false license: other language: - en thumbnail: null tags: - text generation - conversational pipeline_tag: text-generation --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # TehVenom's merge of Pygmalion 7B GGML These are GGML model files for [TehVenom's merge](https://huggingface.co/TehVenom/Pygmalion-7b-Merged-Safetensors) of [Pygmalion 7B](https://huggingface.co/PygmalionAI/pygmalion-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation. To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. **NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Pygmalion-7B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-7b) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. 
Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | pygmalion-7b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | pygmalion-7b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | pygmalion-7b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | pygmalion-7b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | pygmalion-7b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | pygmalion-7b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | pygmalion-7b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | pygmalion-7b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | pygmalion-7b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 pygmalion-7b-superhot-8k.ggmlv3.q4_K_M.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). #### Looking for Merged & Quantized Models? Make some please :) #### Using the monkey-patch? You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor. #### Using Oobabooga with Exllama? Switch your loader to `exllama` or `exllama_hf` Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use** Example in the command-line: - `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf` In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear. 
#### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model - Cutoff length: 4096 # Original model card: Pygmalion 7B <h1 style="text-align: center">Pygmalion 7B</h1> <h2 style="text-align: center">A conversational LLaMA fine-tune.</h2> ## Model Details: Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B. This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project. ## Applying the XORs This models has the XOR files pre-applied out of the box. Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b ## Prompting The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting: ``` [CHARACTER]'s Persona: [A few sentences about the character you want the model to play] <START> [DIALOGUE HISTORY] You: [User's input message here] [CHARACTER]: ``` Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example: ``` Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests. <START> Assistant: Hello! How may I help you today? You: What is Zork? Assistant: ``` Which will generate something like: ``` Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years." ``` The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete. ## Limitations and biases The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
Sekiraw/Pyramid
Sekiraw
2023-07-07T11:52:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T08:50:18Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Sekiraw/Pyramid 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
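If you want to grab the trained policy file programmatically rather than through the browser flow above, a minimal sketch with `huggingface_hub` could look like this; the exact `.onnx` filename inside the repo is an assumption, so check the repo's file list first.

```python
from huggingface_hub import hf_hub_download

# Repo id taken from this card; the filename is a guess - list the repo files to confirm it
local_path = hf_hub_download(
    repo_id="Sekiraw/Pyramid",
    filename="Pyramids.onnx",  # assumed filename, verify in the repo
)
print(f"Policy downloaded to: {local_path}")
```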
spike-spiegel/q-FrozenLake-v1-4x4-noSlippery
spike-spiegel
2023-07-07T11:42:41Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T11:42:39Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="spike-spiegel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
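To go beyond loading the model, here is an illustrative greedy-rollout sketch. It assumes the pickled dict returned by `load_from_hub` exposes the Q-table under a `"qtable"` key and the environment id under `"env_id"` (the usual Deep RL Course convention) and that `gymnasium` is installed; check the actual keys before relying on it.

```python
import gymnasium as gym
import numpy as np

# `model` is the dict returned by load_from_hub in the snippet above;
# the "env_id" and "qtable" keys are assumed from the Deep RL Course convention.
env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])

state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(qtable[state]))               # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward}")
```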
ddoc/xag
ddoc
2023-07-07T11:42:20Z
0
0
null
[ "region:us" ]
null
2023-07-07T11:42:00Z
# Agent Scheduler Introducing AgentScheduler, an A1111/Vladmandic Stable Diffusion Web UI extension to power up your image generation workflow! ## Table of Contents - [Compatibility](#compatibility) - [Installation](#installation) - [Using Vlad Fork](#using-vlads-webui-fork) - [Using the built-in extension list](#using-the-built-in-extension-list) - [Manual clone](#manual-clone) - [Functionality](#functionality-as-of-current-version) - [Settings](#extension-settings) - [API Access](#api-access) - [Troubleshooting](#troubleshooting) - [Road Map](#road-map) - [Contributing](#contributing) - [License](#license) - [Disclaimer](#disclaimer) --- ## Compatibility This version of AgentScheduler is compatible with the latest versions of: - A1111: [commit baf6946](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/baf6946e06249c5af9851c60171692c44ef633e0) - Vladmandic: [commit 9726b4d](https://github.com/vladmandic/automatic/commit/9726b4d23cb63779964e1d4edff49dd2c9c11e51) > Older versions may not work properly. ## Installation ### Using Vlad's WebUI Fork The extension is already included in [Vlad fork](https://github.com/vladmandic/automatic)'s built-in extensions. ### Using the built-in extension list 1. Open the Extensions tab 2. Open the "Install From URL" sub-tab 3. Paste the repo URL: https://github.com/ArtVentureX/sd-webui-agent-scheduler.git 4. Click "Install" ![Install](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/f0fa740b-392a-4dd6-abe1-49c770ea60da) ### Manual clone ```bash git clone "https://github.com/ArtVentureX/sd-webui-agent-scheduler.git" extensions/agent-scheduler ``` (The second argument specifies the name of the folder; you can choose whatever you like.) ## Functionality [as of current version] ![Extension Walkthrough 1](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/a5a039a7-d98b-4186-9131-6775f0812c39) 1️⃣ Input your usual Prompts & Settings. **Enqueue** to send your current prompts, settings and controlnets to **AgentScheduler**. ![Extension Walkthrough 2](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/734176b4-7ee3-40e5-bb92-35608fabfc4b) 2️⃣ **AgentScheduler** Extension Tab. 3️⃣ See all queued tasks, the current image being generated, and each task's associated information. **Drag and drop** the handle at the beginning of each row to rearrange the generation order. 4️⃣ **Pause** to stop queue auto generation. **Resume** to start. 5️⃣ Press ▶️ to prioritize the selected task, or to start a single task when the queue is paused. **Delete** tasks that you no longer want. ![Extension Walkthrough 3](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/23109761-2633-4b24-bbb3-091628367047) 6️⃣ Show queue history. 7️⃣ **Filter** by task status or search by text. 8️⃣ **Bookmark** tasks for easier filtering. 9️⃣ Double click the task id to **rename**. Click ↩️ to **Requeue** an old task. 🔟 Click on each task to **view** the generation results. https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/50c74922-b85f-493c-9be8-b8e78f0cd061 ## Extension Settings Go to `Settings > Agent Scheduler` to access extension settings. ![Settings](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/b0377ccd-f9bf-486e-8393-c06fe26aa117) **Disable Queue Auto-Processing**: Check this option to disable queue auto-processing on start-up. You can also temporarily pause or resume the queue from the Extension tab. 
**Queue Button Placement**: Change the placement of the queue button on the UI. **Hide the Checkpoint Dropdown**: The Extension provides a custom checkpoint dropdown. ![Custom Checkpoint](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/d110d314-a208-4eec-bb54-9f8c73cb450b) By default, queued tasks use the currently loaded checkpoint. However, changing the system checkpoint requires some time to load the checkpoint into memory, and you also cannot change the checkpoint during image generation. You can use this dropdown to quickly queue a task with a custom checkpoint. **Auto Delete Queue History**: Select a timeframe to keep your queue history. Tasks that are older than the configured value will be automatically deleted. Please note that bookmarked tasks will not be deleted. ## API Access All the functionality of this extension can be accessed through HTTP APIs. You can access the API documentation via `http://127.0.0.1:7860/docs`. Remember to include `--api` in your startup arguments. ![API docs](https://github.com/ArtVentureX/sd-webui-agent-scheduler/assets/133728487/012ab2cc-b41f-4c68-8fa5-7ab4e49aa91d) #### Queue Task The two APIs `/agent-scheduler/v1/queue/txt2img` and `/agent-scheduler/v1/queue/img2img` support all the parameters of the original WebUI APIs. These APIs respond with the task id, which can be used to perform updates later. ```json { "task_id": "string" } ``` #### Download Results Use the API `/agent-scheduler/v1/results/{id}` to get the generated images. The API supports two response formats: - JSON with base64-encoded images ```json { "success": true, "data": [ { "image": "data:image/png;base64,iVBORw0KGgoAAAAN...", "infotext": "1girl\nNegative prompt: EasyNegative, badhandv4..." }, { "image": "data:image/png;base64,iVBORw0KGgoAAAAN...", "infotext": "1girl\nNegative prompt: EasyNegative, badhandv4..." } ] } ``` - ZIP file with the querystring `zip=true` (a short end-to-end example is sketched after the Contributing section below) ## Troubleshooting Make sure that you are running the latest version of the extension and an updated version of the WebUI. - To update the extension, go to the `Extensions` tab and click `Check for Updates`, then click `Apply and restart UI`. - To update the WebUI itself, run the command `git pull origin master` in the same folder as webui.bat (or webui.sh). Steps to find the cause of issues: - Check for errors in the WebUI output console. - Press F12 in the browser, go to the console tab, reload the page, and look for any error messages there. Common errors: **AttributeError: module 'modules.script_callbacks' has no attribute 'on_before_reload'** If you see this error message in the output console, try updating the WebUI to the latest version. **ReferenceError: submit_enqueue is not defined** If you click the `Enqueue` button and nothing happens, and you find the above error message in the browser F12 console, follow the steps in [this comment](https://github.com/ArtVentureX/sd-webui-agent-scheduler/issues/4#issuecomment-1575986274). For other errors, feel free to file a new [GitHub issue](https://github.com/ArtVentureX/sd-webui-agent-scheduler/issues/new/choose). ## Road Map Possible feature upgrades for this extension: - Connect multiple SD WebUI nodes to run tasks. - Sync with GenAI Management Platform **ArtVenture** ## Contributing We welcome contributions to the Agent Scheduler Extension project! Please feel free to submit issues, bug reports, and feature requests through the GitHub repository. Please give us a ⭐ if you find this extension helpful! 
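As a usage illustration of the queue and results endpoints documented in the API Access section above, here is a minimal, untested `requests` sketch. The payload fields mirror the standard WebUI txt2img parameters and the values are placeholders; the polling loop assumes `success` stays false until the task has finished, which you should verify against the API docs.

```python
import time
import requests

BASE = "http://127.0.0.1:7860"  # start the WebUI with --api

# Queue a txt2img task (accepts the usual WebUI txt2img parameters)
resp = requests.post(
    f"{BASE}/agent-scheduler/v1/queue/txt2img",
    json={"prompt": "a watercolor fox", "steps": 20, "width": 512, "height": 512},
)
task_id = resp.json()["task_id"]
print("queued:", task_id)

# Poll for results (returns base64-encoded images once the task has finished)
while True:
    result = requests.get(f"{BASE}/agent-scheduler/v1/results/{task_id}").json()
    if result.get("success"):
        print(f"{len(result['data'])} image(s) ready")
        break
    time.sleep(5)
```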
## License This project is licensed under the Apache License 2.0. ## Disclaimer The author(s) of this project are not responsible for any damages or legal issues arising from the use of this software. Users are solely responsible for ensuring that they comply with any applicable laws and regulations when using this software and assume all risks associated with its use. The author(s) are not responsible for any copyright violations or legal issues arising from the use of input or output content. --- ## CRAFTED BY THE PEOPLE BUILDING **ARTVENTURE**, [**ATHERLABS**](https://atherlabs.com/) & [**SIPHER ODYSSEY**](http://playsipher.com/) ### About ArtVenture (coming soon™️) ArtVenture offers powerful collaboration features for Generative AI Image workflows. It is designed to help designers and creative professionals of all levels collaborate more efficiently, unleash their creativity, and have full transparency and tracking over the creation process. ![ArtVenture Teaser](https://user-images.githubusercontent.com/90659883/236376930-831ac345-e979-4ec5-bece-49e4bc497b79.png) ![ArtVenture Teaser 2](https://user-images.githubusercontent.com/90659883/236376933-babe9d36-f42f-4c1c-b59a-08be572a1f4c.png) ### Current Features ArtVenture offers the following key features: - Seamless Access: available on desktop and mobile - Multiplayer & Collaborative UX. Strong collaboration features, such as real-time commenting and feedback, version control, and image/file/project sharing. - Powerful semantic search capabilities. - Building on the shoulders of giants, leveraging A1111/Vladmandic and other pioneers, providing a collaboration process from Idea (Sketch/Thoughts/Business Request) to Final Results (Images/Copywriting Post/Task Completed) in one platform - Automation tooling for certain repeated tasks - Secure and transparent, leveraging hashing and metadata to track the origin and history of models, LoRAs and images to allow for traceability and ease of collaboration. - Personalized UX for both beginner and experienced users to quickly remix existing SD images by editing prompts and negative prompts, selecting new training models and output quality as desired. ### Target Audience ArtVenture is designed for the following target audiences: - Casual Creators - Small Design Teams or Freelancers - Design Agencies & Studios ## 🎉 Stay Tuned for Updates We hope you find this extension useful. We will be adding new features and improvements over time as we enhance this extension to support our creative workflows. To stay up-to-date with the latest news and updates, be sure to follow us on GitHub and Twitter (coming soon™️). We welcome your feedback and suggestions, and are excited to hear how AgentScheduler can help you streamline your workflow and unleash your creativity!
kupru/dqn-SpaceInvadersNoFrameskip-v4
kupru
2023-07-07T11:34:01Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T11:33:22Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 567.50 +/- 361.12 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kupru -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kupru -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kupru ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
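If you prefer plain Python over the RL Zoo scripts above, a minimal sketch for pulling the checkpoint with `huggingface_sb3` might look like this. The `.zip` filename follows the usual RL Zoo naming convention and is an assumption, and the `custom_objects` override is only an optional trick to avoid allocating the full 100000-entry replay buffer when you just want to inspect or run the policy.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the usual RL Zoo convention - confirm it in the repo file list
checkpoint = load_from_hub(
    repo_id="kupru/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# buffer_size is overridden purely to keep RAM low for inference-only use
model = DQN.load(checkpoint, custom_objects={"buffer_size": 1})
print(model.policy)
```

Note that to actually evaluate the agent you would also need to recreate the Atari environment with the same wrappers and frame stacking listed in the hyperparameters above.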
xian79/ml-agents-Pyramids-v0
xian79
2023-07-07T11:30:05Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T11:30:04Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: xian79/ml-agents-Pyramids-v0 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
parsi-ai-nlpclass/G5_HW4_sentiment_part2
parsi-ai-nlpclass
2023-07-07T11:18:13Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T11:09:48Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: output2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output2 This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Graphcore/bert-base-uncased-squad
Graphcore
2023-07-07T11:10:14Z
6
1
transformers
[ "transformers", "pytorch", "optimum_graphcore", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-17T16:17:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: Graphcore/bert-base-uncased-squad results: [] --- # Graphcore/bert-base-uncased-squad Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description BERT (Bidirectional Encoder Representations from Transformers) is a transformer model designed to pretrain bidirectional representations from unlabelled text. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. It is trained with two pretraining objectives: Masked language modelling (MLM) and Next sentence prediction (NSP). Unlike a traditional LM, which sees words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. The pre-trained representation reduces the engineering effort needed to build task-specific architectures, and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model is a fine-tuned version of [Graphcore/bert-base-uncased](https://huggingface.co/Graphcore/bert-base-uncased) on the squad dataset. ## Training and evaluation data Trained on the SQuAD dataset: - [HuggingFace/squad](https://huggingface.co/datasets/squad) ## Training procedure The model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library. 
Command line: ``` python examples/question-answering/run_qa.py \ --model_name_or_path Graphcore/bert-base-uncased \ --ipu_config_name Graphcore/bert-base-ipu \ --dataset_name squad \ --do_train \ --do_eval \ --num_train_epochs 3 \ --per_device_train_batch_size 2 \ --per_device_eval_batch_size 2 \ --gradient_accumulation_steps 16 \ --pod_type pod16 \ --learning_rate 9e-5 \ --max_seq_length 384 \ --doc_stride 128 \ --seed 42\ --lr_scheduler_type linear \ --loss_scaling 64 \ --weight_decay 0.01 \ --warmup_ratio 0.2 \ --logging_steps 1 \ --save_steps 50 \ --dataloader_num_workers 64 \ --ipu_config_overrides "embedding_serialization_factor=2" \ --output_dir squad_v2_bert_base \ --overwrite_output_dir ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 - training precision: Mixed Precision ### Training results ``` { "epoch": 3.0, "eval_exact_match": 81.79754020813624, "eval_f1": 88.84840994541061, "eval_samples": 10784 } ``` ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cpu - Datasets 1.18.4 - Tokenizers 0.11.6
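Although the checkpoint was fine-tuned on IPUs, the saved weights are a standard PyTorch BERT, so they should also load with the plain Transformers question-answering pipeline; a small illustrative sketch follows (the question and context are made up, and running on CPU/GPU rather than IPU is an assumption).

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/bert-base-uncased-squad")

result = qa(
    question="What does Optimum Graphcore provide?",
    context=(
        "Optimum Graphcore is an extension of Transformers providing performance "
        "optimization tools for training and running models on Graphcore IPUs."
    ),
)
print(result["answer"], result["score"])
```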
Graphcore/gpt2-medium-wikitext-103
Graphcore
2023-07-07T11:07:28Z
7
1
transformers
[ "transformers", "pytorch", "optimum_graphcore", "gpt2", "text-generation", "generated_from_trainer", "dataset:wikitext", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-23T16:30:12Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikitext model-index: - name: clm_output_medium results: [] --- # Graphcore/gpt2-medium-wikitext-103 Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description GPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) ## Intended uses & limitations This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the [wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset. It achieves the following results on the evaluation set: - Loss: 2.6973 ## Training and evaluation data Trained on wikipedia dataset: - [HuggingFace/wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset ## Training procedure Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore). 
Command line: ``` python examples/language-modeling/run_clm.py \ --model_name_or_path gpt2-medium \ --ipu_config_name Graphcore/gpt2-medium-ipu \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --do_train \ --do_eval \ --num_train_epochs 10 \ --dataloader_num_workers 64 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 256 \ --output_dir /tmp/clm_output_medium \ --logging_steps 5 \ --learning_rate 1e-5 \ --lr_scheduler_type linear \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.1 \ --ipu_config_overrides="embedding_serialization_factor=5,inference_device_iterations=9,replication_factor=2,inference_replication_factor=2,ipus_per_replica=8,layers_per_ipu=[0 3 3 3 3 4 4 4],matmul_proportion=0.25" \ --dataloader_drop_last \ --pod_type pod16 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 256 - total_train_batch_size: 1024 - total_eval_batch_size: 18 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - training precision: Mixed Precision ### Training results ``` ***** train metrics ***** "epoch": 10.0, "train_loss": 2.8070910754504506, "train_runtime": 11217.8167, "train_samples": 114248, "train_samples_per_second": 101.845, "train_steps_per_second": 0.099 ***** eval metrics ***** "eval_loss": 2.697265625, "eval_samples": 240, "perplexity": 14.83910053420958 ``` ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
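As a rough usage sketch (not from the original card), assuming the checkpoint works with the standard `transformers` text-generation pipeline outside the IPU stack; the prompt is illustrative:

```python
from transformers import pipeline

# Assumption: the fine-tuned GPT-2 medium checkpoint loads in plain transformers
generator = pipeline("text-generation", model="Graphcore/gpt2-medium-wikitext-103")

outputs = generator(
    "The history of the city began when",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```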
Graphcore/hubert-base-ipu
Graphcore
2023-07-07T11:06:29Z
2
0
null
[ "optimum_graphcore", "arxiv:2106.07447", "license:apache-2.0", "region:us" ]
null
2022-03-07T13:23:26Z
--- license: apache-2.0 --- # Graphcore/roberta-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description HUBERT (Hidden-Unit BERT) is a BERT-based model for self-supervised speech representation learning approach that relies on predicting K-means cluster assignments of masked segments of continuous output. This approach forces the model to learn a combined acoustic and language model over the continuous inputs by applying the prediction loss over the masked region. Paper link : [Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Unit](https://arxiv.org/pdf/2106.07447v1.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the HuBERT-base model (e.g. [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/hubert-base-ipu") ```
Graphcore/gpt2-medium-ipu
Graphcore
2023-07-07T11:05:57Z
2
0
null
[ "optimum_graphcore", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: apache-2.0 --- # Graphcore/gpt2-medium-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description GPT2 is a large transformer-based language model. It is built using transformer decoder blocks. BERT, on the other hand, uses transformer encoder blocks. It adds Layer normalisation to the input of each sub-block, similar to a pre-activation residual networks and an additional layer normalisation. Paper link : [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [HuggingFace/gpt2-medium](https://huggingface.co/gpt2-medium) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/gpt2-medium-ipu") ```
Graphcore/bart-base-ipu
Graphcore
2023-07-07T11:05:40Z
3
1
null
[ "optimum_graphcore", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: apache-2.0 --- # Graphcore/bart-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description BART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations This model contains just the `IPUConfig` files for running the BART base model (e.g. [facebook/bart-base](https://huggingface.co/facebook/bart-base)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/bart-base-ipu") ```
CleverShovel/rubert-tiny2-tnved-v3
CleverShovel
2023-07-07T11:03:53Z
51
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-06-10T15:49:20Z
--- license: mit --- Predicts the commodity heading of the TN VED code (first 4 digits); all commodity headings are covered. Trained on data from the [aihack](https://www.kaggle.com/datasets/mikhailkostin/aihack-ved) hackathon, with part of the data generated using [FRED T5 instructor](https://huggingface.co/Den4ikAI/FRED-T5-XL_instructor). If you have more TN VED data, please message me on [Telegram](https://t.me/clevershovel), thank you.
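A hypothetical inference sketch (not from the original card), assuming the repository ships a sequence-classification head whose labels are the 4-digit TN VED headings; the product description is made up:

```python
from transformers import pipeline

# Assumption: the checkpoint loads as a text-classification model with TN VED headings as labels
classifier = pipeline("text-classification", model="CleverShovel/rubert-tiny2-tnved-v3")

# Made-up product description (Russian); the top label should be a 4-digit heading
print(classifier("футболка женская из хлопка, трикотажная"))
```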
Graphcore/convnext-base-ipu
Graphcore
2023-07-07T11:03:51Z
6
0
null
[ "optimum_graphcore", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
null
2022-06-22T16:30:43Z
--- license: apache-2.0 --- # Graphcore/convnext-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description Paper link : [A ConvNet for the 2020s](https://arxiv.org/pdf/2201.03545.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/convnext-base-ipu") ```
CleverShovel/rubert-tiny2-tnved-v4
CleverShovel
2023-07-07T11:03:37Z
48
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-06-20T14:13:23Z
--- license: mit --- Predicts the full TN VED code; the config lists which codes are predicted. Only part of the codes are covered: the dataset contained examples for 3523 unique codes, but 865 of them had only a single example each and were not included in the training set, so the model saw data for only 2658 codes. Trained on the preliminary classification rulings of the EAC, which can be found [here](https://customs.gov.ru/uchastnikam-ved/informacziya-o-klassifikaczii-i-proisxozhdenii-tovara/dejstvuyushhie-predvaritel-nye-resheniya-o-klassifikaczii-tovarov) and [here](https://customs.gov.ru/opendata/77301176610-prereoklasstov). If you have more TN VED data, please message me on [Telegram](https://t.me/clevershovel), thank you.
Graphcore/bert-base-ipu
Graphcore
2023-07-07T11:03:34Z
2
1
null
[ "optimum_graphcore", "region:us" ]
null
2022-03-02T23:29:04Z
# Graphcore/bert-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description BERT (Bidirectional Encoder Representations from Transformers) is a transformers model which is designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. It was trained with two objectives in pretraining : Masked language modelling (MLM) and Next sentence prediction(NSP). First, MLM is different from traditional LM which sees the words one after another while BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pertaining text-pair representations. It reduces the need of many engineering efforts for building task specific architectures through pre-trained representation. And achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model contains just the `IPUConfig` files for running the BERT base model (e.g. [bert-base-uncased](https://huggingface.co/bert-base-uncased) or [bert-base-cased](https://huggingface.co/bert-base-cased)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu") ```
Graphcore/deberta-base-ipu
Graphcore
2023-07-07T11:03:16Z
8
0
null
[ "optimum_graphcore", "arxiv:2006.03654", "region:us" ]
null
2022-03-02T23:29:04Z
# Graphcore/deberta-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description DeBERTa([Decoding-enhanced BERT with Disentangled Attention ](https://arxiv.org/abs/2006.03654 )) improves the BERT and RoBERTa models using the disentangled attention mechanism and an enhanced mask decoder which is used to replace the output softmax layer to predict the masked tokens for model pretraining. Through two techniques, it could significantly improve the efficiency of model pre-training and performance of downstream tasks. # Intended uses & limitations This model contains just the `IPUConfig` files for running the DeBERTa-base model (e.g. [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/deberta-base-ipu") ```
Graphcore/distilbert-base-ipu
Graphcore
2023-07-07T11:02:52Z
5
0
null
[ "optimum_graphcore", "arxiv:1910.01108", "license:apache-2.0", "region:us" ]
null
2022-09-13T11:53:13Z
--- license: apache-2.0 --- # Graphcore/distilbert-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description DistilBERT is a distilled version of BERT introduced in [this paper](https://arxiv.org/abs/1910.01108). ## Intended uses & limitations This model contains just the `IPUConfig` files for running the DistilBERT base model (e.g. [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/distilbert-base-ipu") ```
Graphcore/wav2vec2-ctc-large-ipu
Graphcore
2023-07-07T11:01:22Z
2
0
null
[ "optimum_graphcore", "arxiv:2006.11477", "license:apache-2.0", "region:us" ]
null
2023-04-11T19:14:42Z
--- license: apache-2.0 --- # Graphcore/wav2vec2-ctc-large-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to take train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency in the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows a seamless integration to our State-of-the-art hardware giving you a quicker time-to-value for your AI project. ## Model description From [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/pdf/2006.11477v3.pdf), “Wave2vec2 is a framework for self-supervised learning of speech representations. It masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned.” ## Intended uses & limitations This model contains just the `IPUConfig` files for running the Wav2Vec2ForCTC large model (e.g. [wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/wav2vec2-ctc-large-ipu") ```
Graphcore/deberta-base-squad
Graphcore
2023-07-07T11:00:02Z
9
1
transformers
[ "transformers", "pytorch", "tensorboard", "optimum_graphcore", "deberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-04-06T15:38:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: deberta-base-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-squad This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 1984 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 2.0 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cpu - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
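As an illustrative sketch (not part of the original card), assuming the checkpoint loads with `AutoModelForQuestionAnswering` in plain `transformers`; the question/context pair is invented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("Graphcore/deberta-base-squad")
model = AutoModelForQuestionAnswering.from_pretrained("Graphcore/deberta-base-squad")

question = "What hardware was used for fine-tuning?"
context = "The DeBERTa base model was fine-tuned on SQuAD using Graphcore IPUs."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the span between them
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```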
idealflaw/q-FrozenLake-v1-4x4-noSlippery
idealflaw
2023-07-07T10:52:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T10:34:42Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="idealflaw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
arunboss/test_triage
arunboss
2023-07-07T10:36:19Z
213
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:arunboss/test", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-06T06:51:33Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_triage results: [] datasets: - arunboss/test --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_triage This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the Test dataset. It achieves the following results on the evaluation set: - Loss: 1.9758 - Accuracy: 0.5008 ## Model description This is a basic skin disease recognition model without the specific disease information for now. I just wanted to test the platform for hosting capabilities and check other features. ## Intended uses & limitations For now, its just a test environment. We have the basic pipeline of data & processing in place to push to this place. Future use will be to open source the dataset and allow the community to fine tune the skin identification and triaging module for broader and free-for-all in commercial/non-commercial usage. ## Training and evaluation data We have a lot of open & closed datasets that have been compiled over years and annotated. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.3471 | 1.0 | 151 | 3.2152 | 0.2452 | | 2.7313 | 2.0 | 303 | 2.5291 | 0.3817 | | 2.48 | 3.0 | 454 | 2.2459 | 0.4413 | | 2.2192 | 4.0 | 606 | 2.0968 | 0.4702 | | 2.0479 | 5.0 | 757 | 2.0026 | 0.4897 | | 1.9702 | 5.98 | 906 | 1.9758 | 0.5008 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
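A hedged usage sketch (not in the original card), assuming the fine-tuned Swin checkpoint works with the standard image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Assumption: the checkpoint loads as a standard Swin image classifier
classifier = pipeline("image-classification", model="arunboss/test_triage")

# "skin_photo.jpg" is a hypothetical local file; a URL to an image also works
for pred in classifier("skin_photo.jpg"):
    print(pred["label"], round(pred["score"], 3))
```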
Chickenfish/MonicaA
Chickenfish
2023-07-07T10:29:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T10:28:02Z
--- license: creativeml-openrail-m ---
Binaryy/xlm-roberta-large-finetuned-cola
Binaryy
2023-07-07T10:20:49Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-30T09:19:37Z
--- license: mit tags: - generated_from_trainer metrics: - matthews_correlation model-index: - name: xlm-roberta-large-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-cola This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1456 - Matthews Correlation: 0.9419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4465 | 1.0 | 606 | 0.4478 | 0.5033 | | 0.364 | 2.0 | 1212 | 0.2318 | 0.8500 | | 0.2294 | 3.0 | 1818 | 0.1767 | 0.9045 | | 0.16 | 4.0 | 2424 | 0.1353 | 0.9343 | | 0.0739 | 5.0 | 3030 | 0.1456 | 0.9419 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
nomsgadded/textual_inversion_shark
nomsgadded
2023-07-07T10:01:05Z
36
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T08:40:14Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - nomsgadded/textual_inversion_shark These are textual inversion adaptation weights for CompVis/stable-diffusion-v1-4. You can find some example images below.
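A tentative usage sketch (not part of the original card), assuming the repository stores a standard learned-embeddings file that diffusers can load with `load_textual_inversion`; the placeholder token `<shark>` is a guess — the actual token is whatever was saved with the embedding:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding on top of the base model (the token name is an assumption)
pipe.load_textual_inversion("nomsgadded/textual_inversion_shark")

image = pipe("a photo of <shark> swimming over a coral reef", num_inference_steps=30).images[0]
image.save("shark_sample.png")
```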
mankra/mini_text_classification_finetune_model
mankra
2023-07-07T09:58:54Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T07:11:03Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: mini_text_classification_finetune_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini_text_classification_finetune_model This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2095 - Accuracy: 0.3333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 1.3045 | 0.3333 | | No log | 2.0 | 2 | 1.2998 | 0.3333 | | No log | 3.0 | 3 | 1.2947 | 0.3333 | | No log | 4.0 | 4 | 1.2899 | 0.3333 | | No log | 5.0 | 5 | 1.2851 | 0.3333 | | No log | 6.0 | 6 | 1.2809 | 0.3333 | | No log | 7.0 | 7 | 1.2766 | 0.3333 | | No log | 8.0 | 8 | 1.2721 | 0.3333 | | No log | 9.0 | 9 | 1.2684 | 0.3333 | | No log | 10.0 | 10 | 1.2645 | 0.3333 | | No log | 11.0 | 11 | 1.2607 | 0.3333 | | No log | 12.0 | 12 | 1.2567 | 0.3333 | | No log | 13.0 | 13 | 1.2528 | 0.3333 | | No log | 14.0 | 14 | 1.2490 | 0.3333 | | No log | 15.0 | 15 | 1.2451 | 0.3333 | | No log | 16.0 | 16 | 1.2413 | 0.3333 | | No log | 17.0 | 17 | 1.2377 | 0.3333 | | No log | 18.0 | 18 | 1.2342 | 0.3333 | | No log | 19.0 | 19 | 1.2307 | 0.3333 | | No log | 20.0 | 20 | 1.2275 | 0.3333 | | No log | 21.0 | 21 | 1.2244 | 0.3333 | | No log | 22.0 | 22 | 1.2215 | 0.3333 | | No log | 23.0 | 23 | 1.2190 | 0.3333 | | No log | 24.0 | 24 | 1.2167 | 0.3333 | | No log | 25.0 | 25 | 1.2147 | 0.3333 | | No log | 26.0 | 26 | 1.2130 | 0.3333 | | No log | 27.0 | 27 | 1.2116 | 0.3333 | | No log | 28.0 | 28 | 1.2105 | 0.3333 | | No log | 29.0 | 29 | 1.2098 | 0.3333 | | No log | 30.0 | 30 | 1.2095 | 0.3333 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
sinny/ppo-pyramids
sinny
2023-07-07T09:54:48Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T09:34:27Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: sinny/ppo-pyramids 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went
jordyvl
2023-07-07T09:52:44Z
103
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T07:43:27Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0783 - Accuracy: 0.71 - Exit 0 Accuracy: 0.115 - Exit 1 Accuracy: 0.1575 - Exit 2 Accuracy: 0.185 - Exit 3 Accuracy: 0.0875 - Exit 4 Accuracy: 0.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 288 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.72 | 2 | 2.7602 | 0.1125 | 0.0925 | 0.0675 | 0.0875 | 0.0625 | 0.0625 | | No log | 1.72 | 4 | 2.7309 | 0.115 | 0.1175 | 0.0675 | 0.1075 | 0.0625 | 0.0625 | | No log | 2.72 | 6 | 2.6967 | 0.1325 | 0.095 | 0.06 | 0.1175 | 0.0625 | 0.0625 | | No log | 3.72 | 8 | 2.6631 | 0.17 | 0.085 | 0.0575 | 0.1275 | 0.0625 | 0.0625 | | No log | 4.72 | 10 | 2.6242 | 0.205 | 0.085 | 0.0575 | 0.1225 | 0.0625 | 0.0625 | | No log | 5.72 | 12 | 2.5736 | 0.2175 | 0.0875 | 0.0825 | 0.12 | 0.0625 | 0.0625 | | No log | 6.72 | 14 | 2.5410 | 0.215 | 0.09 | 0.08 | 0.12 | 0.0625 | 0.0625 | | No log | 7.72 | 16 | 2.5229 | 0.2325 | 0.1 | 0.0925 | 0.13 | 0.0625 | 0.0625 | | No log | 8.72 | 18 | 2.4841 | 0.2525 | 0.1 | 0.1 | 0.1325 | 0.0625 | 0.0625 | | No log | 9.72 | 20 | 2.4382 | 0.29 | 0.1 | 0.1025 | 0.1325 | 0.0625 | 0.0625 | | No log | 10.72 | 22 | 2.3823 | 0.3 | 0.1 | 0.1275 | 0.1325 | 0.0625 | 0.0625 | | No log | 11.72 | 24 | 2.3389 | 0.3275 | 0.1 | 0.1175 | 0.1225 | 0.0625 | 0.0625 | | No log | 12.72 | 26 | 2.3002 | 0.35 | 0.0975 | 0.12 | 0.1225 | 0.0625 | 0.0625 | | No log | 13.72 | 28 | 2.2421 | 0.36 | 0.0975 | 0.125 | 0.1275 | 0.0625 | 0.0625 | | No log | 14.72 | 30 | 2.2026 | 0.3575 | 0.1025 | 0.13 | 0.125 | 0.0625 | 0.0625 | | No log | 15.72 | 32 | 2.1712 | 0.375 | 0.105 | 0.1375 | 0.125 | 0.0625 | 0.0625 | | No log | 16.72 | 34 | 2.0999 | 0.4075 | 0.1 | 0.145 | 0.125 | 0.0625 | 0.0625 | | No log | 17.72 | 36 | 2.0414 | 0.4225 | 0.1025 | 0.145 | 0.1275 | 0.0625 | 0.0625 | | No log | 18.72 | 38 | 1.9981 | 0.4375 | 0.0975 | 0.1425 | 0.13 | 0.0625 | 0.0625 | | No log | 19.72 | 40 | 1.9369 | 0.4575 | 0.1025 | 0.14 | 0.1425 | 0.0625 | 0.0625 | | No log | 20.72 | 42 | 1.8903 | 0.4975 | 0.1025 | 0.14 | 0.145 | 0.0625 | 0.0625 | | No log | 21.72 | 44 | 1.8242 | 0.525 | 0.1025 | 0.1425 | 0.15 | 0.0625 | 0.0625 | | No log | 22.72 | 46 | 1.7520 | 0.5325 | 0.11 | 0.1475 | 0.1475 | 0.0625 | 0.0625 | | No 
log | 23.72 | 48 | 1.7203 | 0.5525 | 0.1125 | 0.1475 | 0.1525 | 0.0625 | 0.0625 | | No log | 24.72 | 50 | 1.6753 | 0.565 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 25.72 | 52 | 1.6245 | 0.575 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 26.72 | 54 | 1.5832 | 0.61 | 0.11 | 0.15 | 0.1525 | 0.0625 | 0.0625 | | No log | 27.72 | 56 | 1.5404 | 0.61 | 0.11 | 0.1475 | 0.155 | 0.0625 | 0.0625 | | No log | 28.72 | 58 | 1.4958 | 0.6125 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 29.72 | 60 | 1.4613 | 0.6325 | 0.11 | 0.1475 | 0.1575 | 0.0625 | 0.0625 | | No log | 30.72 | 62 | 1.4479 | 0.63 | 0.11 | 0.1525 | 0.16 | 0.0625 | 0.0625 | | No log | 31.72 | 64 | 1.4101 | 0.64 | 0.1125 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 32.72 | 66 | 1.3699 | 0.655 | 0.1125 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 33.72 | 68 | 1.3427 | 0.6725 | 0.115 | 0.1525 | 0.165 | 0.0625 | 0.0625 | | No log | 34.72 | 70 | 1.3161 | 0.6825 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 | | No log | 35.72 | 72 | 1.2896 | 0.7025 | 0.115 | 0.1525 | 0.1675 | 0.0625 | 0.0625 | | No log | 36.72 | 74 | 1.2720 | 0.705 | 0.11 | 0.1525 | 0.185 | 0.0625 | 0.0625 | | No log | 37.72 | 76 | 1.2471 | 0.71 | 0.11 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 38.72 | 78 | 1.2307 | 0.71 | 0.11 | 0.155 | 0.1775 | 0.0625 | 0.0625 | | No log | 39.72 | 80 | 1.2174 | 0.7175 | 0.1125 | 0.155 | 0.1825 | 0.0625 | 0.0625 | | No log | 40.72 | 82 | 1.1991 | 0.705 | 0.1125 | 0.1525 | 0.1775 | 0.0625 | 0.0625 | | No log | 41.72 | 84 | 1.1867 | 0.71 | 0.1175 | 0.1525 | 0.18 | 0.065 | 0.0625 | | No log | 42.72 | 86 | 1.1764 | 0.7025 | 0.115 | 0.1525 | 0.18 | 0.0675 | 0.0625 | | No log | 43.72 | 88 | 1.1601 | 0.715 | 0.115 | 0.1525 | 0.1825 | 0.0725 | 0.0625 | | No log | 44.72 | 90 | 1.1410 | 0.7175 | 0.115 | 0.1525 | 0.18 | 0.075 | 0.0625 | | No log | 45.72 | 92 | 1.1408 | 0.71 | 0.115 | 0.155 | 0.1825 | 0.075 | 0.0625 | | No log | 46.72 | 94 | 1.1443 | 0.7075 | 0.115 | 0.155 | 0.1825 | 0.0775 | 0.0625 | | No log | 47.72 | 96 | 1.1364 | 0.705 | 0.115 | 0.155 | 0.1775 | 0.0825 | 0.0625 | | No log | 48.72 | 98 | 1.1251 | 0.71 | 0.115 | 0.155 | 0.175 | 0.085 | 0.0625 | | No log | 49.72 | 100 | 1.1113 | 0.7175 | 0.115 | 0.155 | 0.1775 | 0.085 | 0.0625 | | No log | 50.72 | 102 | 1.1040 | 0.7175 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 51.72 | 104 | 1.0972 | 0.715 | 0.115 | 0.155 | 0.18 | 0.0875 | 0.0625 | | No log | 52.72 | 106 | 1.0938 | 0.7175 | 0.115 | 0.1575 | 0.1825 | 0.0875 | 0.0625 | | No log | 53.72 | 108 | 1.0931 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 54.72 | 110 | 1.0887 | 0.7075 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | | No log | 55.72 | 112 | 1.0865 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 56.72 | 114 | 1.0828 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 57.72 | 116 | 1.0801 | 0.7075 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 58.72 | 118 | 1.0786 | 0.7125 | 0.115 | 0.1575 | 0.1875 | 0.0875 | 0.0625 | | No log | 59.72 | 120 | 1.0783 | 0.71 | 0.115 | 0.1575 | 0.185 | 0.0875 | 0.0625 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
hafeezmhk6/mt5-base-ver6.15
hafeezmhk6
2023-07-07T09:50:03Z
48
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T09:48:16Z
--- metrics: - bleu - character - chrf pipeline_tag: text-classification ---
metafresh89/qr-code
metafresh89
2023-07-07T09:48:46Z
16
4
diffusers
[ "diffusers", "safetensors", "ctrl", "stable-diffusion", "controlnet", "image-to-image", "en", "license:openrail++", "endpoints_compatible", "region:us" ]
image-to-image
2023-07-07T09:24:46Z
--- tags: - stable-diffusion - controlnet - image-to-image license: openrail++ language: - en library_name: diffusers pipeline_tag: image-to-image duplicated_from: DionTimmer/controlnet_qrcode-control_v1p_sd15 --- # QR Code Conditioned ControlNet Models for Stable Diffusion 1.5 ![1](https://www.dropbox.com/s/fxyuqpot2z2ftty/5.png?raw=1) ## Model Description This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v1.5. The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, this 1.5 version model was also trained on the same dataset for those who are using the older version. ## How to use with Diffusers ```bash pip -q install diffusers transformers accelerate torch xformers ``` ```python import torch from PIL import Image from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler from diffusers.utils import load_image controlnet = ControlNetModel.from_pretrained("DionTimmer/controlnet_qrcode-control_v1p_sd15", torch_dtype=torch.float16) pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16 ) pipe.enable_xformers_memory_efficient_attention() pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() def resize_for_condition_image(input_image: Image, resolution: int): input_image = input_image.convert("RGB") W, H = input_image.size k = float(resolution) / min(H, W) H *= k W *= k H = int(round(H / 64.0)) * 64 W = int(round(W / 64.0)) * 64 img = input_image.resize((W, H), resample=Image.LANCZOS) return img # play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR Code Image # qr code image source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png") # initial image, anything init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg") condition_image = resize_for_condition_image(source_image, 768) init_image = resize_for_condition_image(init_image, 768) generator = torch.manual_seed(123121231) image = pipe(prompt="a bilboard in NYC with a qrcode", negative_prompt="ugly, disfigured, low quality, blurry, nsfw", image=init_image, control_image=condition_image, width=768, height=768, guidance_scale=20, controlnet_conditioning_scale=1.5, generator=generator, strength=0.9, num_inference_steps=150, ) image.images[0] ``` ## Performance and Limitations These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape. However, be cautious as this might negatively impact the style of your output.**To optimize for scanning, please generate your QR codes with correction mode 'H' (30%).** To balance between style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, aswell as the correct prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science. For the best results, it is recommended to generate your artwork at a resolution of 768. 
This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork. ## Installation The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other controlnet models are installed, which varies per application. For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded using the controlnet webui extension, which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your controlnet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base stable diffusion model, or it will error. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail. Make sure to look up additional info on how to use controlnet if you get stuck; once you have the webui up and running, it's really easy to install the controlnet extension as well.
Arup-Dutta-Bappy/bert-large-uncased-finetuned-squad
Arup-Dutta-Bappy
2023-07-07T09:42:01Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-04T10:31:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-large-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-squad This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
KINGeorge2000/sentiment_roberta_yu
KINGeorge2000
2023-07-07T09:31:20Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-23T05:49:16Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: sentiment_roberta_yu results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_roberta_yu This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2580 - Accuracy: 0.6668 - F1: 0.6668 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
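An illustrative sketch (not from the original card), assuming the checkpoint loads with the standard text-classification pipeline; the label names and example sentence are not taken from the card:

```python
from transformers import pipeline

# Assumption: the fine-tuned RoBERTa checkpoint exposes a sequence-classification head
classifier = pipeline("text-classification", model="KINGeorge2000/sentiment_roberta_yu", top_k=None)

# top_k=None returns scores for every label defined in the model config
print(classifier("The service was quick and the staff were friendly."))
```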
RHP27042002/AI_NFT_generator
RHP27042002
2023-07-07T09:23:52Z
0
0
adapter-transformers
[ "adapter-transformers", "code", "text-generation", "en", "dataset:OpenAssistant/oasst1", "license:mit", "region:us" ]
text-generation
2023-07-07T09:09:28Z
--- license: mit datasets: - OpenAssistant/oasst1 language: - en metrics: - character pipeline_tag: text-generation tags: - code library_name: adapter-transformers --- // SPDX-License-Identifier: UNLICENSED pragma solidity ^0.8.0; import "@openzeppelin/contracts/utils/Counters.sol"; import "@openzeppelin/contracts/token/ERC721/ERC721.sol"; import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol"; contract NFT is ERC721URIStorage { using Counters for Counters.Counter; Counters.Counter private _tokenIds; address public owner; uint256 public cost; constructor( string memory _name, string memory _symbol, uint256 _cost ) ERC721(_name, _symbol) { owner = msg.sender; cost = _cost; } function mint(string memory tokenURI) public payable { require(msg.value >= cost); _tokenIds.increment(); uint256 newItemId = _tokenIds.current(); _mint(msg.sender, newItemId); _setTokenURI(newItemId, tokenURI); } function totalSupply() public view returns (uint256) { return _tokenIds.current(); } function withdraw() public { require(msg.sender == owner); (bool success, ) = owner.call{value: address(this).balance}(""); require(success); } }
arc-r/faster-whisper-large-zh-cv11
arc-r
2023-07-07T09:23:49Z
4
7
ctranslate2
[ "ctranslate2", "audio", "automatic-speech-recognition", "zh", "region:us" ]
automatic-speech-recognition
2023-07-07T06:31:12Z
--- language: - zh tags: - audio - automatic-speech-recognition library_name: ctranslate2 --- # whisper-large-zh-cv11 model for CTranslate2 This repository contains the conversion of [jonatasgrosman/whisper-large-zh-cv11](https://huggingface.co/jonatasgrosman/whisper-large-zh-cv11) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format. This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper). ## Example ```python from faster_whisper import WhisperModel model = WhisperModel("arc-r/faster-whisper-large-zh-cv11") segments, info = model.transcribe("audio.mp3") for segment in segments: print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text)) ``` ## Conversion details The original model was converted with the following command: ``` ct2-transformers-converter --model jonatasgrosman/whisper-large-zh-cv11 --output_dir faster-whisper-large-zh-cv11 \ --quantization float16 ``` Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html). ## More information **For more information about the original model, see its [model card](https://huggingface.co/jonatasgrosman/whisper-large-zh-cv11).**
Harjas123/my_awesome_model
Harjas123
2023-07-07T09:21:46Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T07:57:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6268 - Accuracy: 0.5833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5998 | 1.0 | 6 | 0.7076 | 0.5417 | | 0.5384 | 2.0 | 12 | 0.6929 | 0.5417 | | 0.4793 | 3.0 | 18 | 0.6850 | 0.5417 | | 0.4221 | 4.0 | 24 | 0.6849 | 0.5417 | | 0.3747 | 5.0 | 30 | 0.6591 | 0.5417 | | 0.3214 | 6.0 | 36 | 0.6371 | 0.5833 | | 0.2857 | 7.0 | 42 | 0.6286 | 0.5833 | | 0.2549 | 8.0 | 48 | 0.6281 | 0.5833 | | 0.2333 | 9.0 | 54 | 0.6290 | 0.5833 | | 0.2196 | 10.0 | 60 | 0.6268 | 0.5833 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Uminosachi/realisticVisionV30_v30VAE-inpainting
Uminosachi
2023-07-07T09:15:20Z
35
2
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-03T23:54:35Z
--- license: creativeml-openrail-m --- This is an inpainting model, which has been converted from the [realisticVisionV30_v30VAE-inpainting](https://civitai.com/models/4201?modelVersionId=105723).
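A hedged usage sketch (not part of the original card), assuming the converted checkpoint exposes a standard 9-channel inpainting UNet that `StableDiffusionInpaintPipeline` can drive; the image paths and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/realisticVisionV30_v30VAE-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("photo.png")       # hypothetical input image
mask_image = load_image("photo_mask.png")  # white pixels mark the region to repaint

result = pipe(
    prompt="a wooden bench in a sunny park, photorealistic",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("inpainted.png")
```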
Uminosachi/realisticVisionV20_v20-inpainting
Uminosachi
2023-07-07T09:11:11Z
48
1
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-01T12:01:31Z
--- license: creativeml-openrail-m --- This is an inpainting model, which has been converted from the [realisticVisionV20_v20-inpainting](https://civitai.com/models/4201?modelVersionId=29461).
lxyuan/distilgpt2-finetuned-finance
lxyuan
2023-07-07T09:09:48Z
210
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "en", "dataset:causal-lm/finance", "dataset:gbharti/finance-alpaca", "dataset:PaulAdversarial/all_news_finance_sm_1h2023", "dataset:winddude/reddit_finance_43_250k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-06-29T03:27:54Z
--- tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-finance results: [] license: apache-2.0 datasets: - causal-lm/finance - gbharti/finance-alpaca - PaulAdversarial/all_news_finance_sm_1h2023 - winddude/reddit_finance_43_250k language: - en --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-finance This model is a fine-tuned version of distilgpt2 on the the combination of 4 different finance datasets: - [causal-lm/finance](https://huggingface.co/datasets/causal-lm/finance) - [gbharti/finance-alpaca](https://huggingface.co/datasets/gbharti/finance-alpaca) - [PaulAdversarial/all_news_finance_sm_1h2023](https://huggingface.co/datasets/PaulAdversarial/all_news_finance_sm_1h2023) - [winddude/reddit_finance_43_250k](https://huggingface.co/datasets/winddude/reddit_finance_43_250k) ## Training and evaluation data One can reproduce the dataset using the following code: ```python # load dataset dataset_1 = load_dataset("gbharti/finance-alpaca") dataset_2 = load_dataset("PaulAdversarial/all_news_finance_sm_1h2023") dataset_3 = load_dataset("winddude/reddit_finance_43_250k") dataset_4 = load_dataset("causal-lm/finance") # create a column called text dataset_1 = dataset_1.map( lambda example: {"text": example["instruction"] + " " + example["output"]}, num_proc=4, ) dataset_1 = dataset_1.remove_columns(["input", "instruction", "output"]) dataset_2 = dataset_2.map( lambda example: {"text": example["title"] + " " + example["description"]}, num_proc=4, ) dataset_2 = dataset_2.remove_columns( ["_id", "main_domain", "title", "description", "created_at"] ) dataset_3 = dataset_3.map( lambda example: { "text": example["title"] + " " + example["selftext"] + " " + example["body"] }, num_proc=4, ) dataset_3 = dataset_3.remove_columns( [ "id", "title", "selftext", "z_score", "normalized_score", "subreddit", "body", "comment_normalized_score", "combined_score", ] ) dataset_4 = dataset_4.map( lambda example: {"text": example["instruction"] + " " + example["output"]}, num_proc=4, ) dataset_4 = dataset_4.remove_columns(["input", "instruction", "output"]) # combine and split train test sets combined_dataset = concatenate_datasets( [ dataset_1["train"], dataset_2["train"], dataset_3["train"], dataset_4["train"], dataset_4["validation"], ] ) datasets = combined_dataset.train_test_split(test_size=0.2) ``` ## Inference example ```python from transformers import pipeline generator = pipeline(model="lxyuan/distilgpt2-finetuned-finance") generator("Tesla is", pad_token_id=generator.tokenizer.eos_token_id, max_new_tokens=200, num_return_sequences=2 ) >>> {'generated_text': 'Tesla is likely going to have a "market crash" over 20 years - I believe I\'m just not sure how this is going to affect the world. \n\nHowever, I would like to see this play out as a global financial crisis. With US interest rates already high, a crash in global real estate prices means that people are likely to feel pressure on assets that are less well served by the assets the US government gives them. \n\nWould these things help you in your retirement? I\'m fairly new to Wall Street, and it makes me think that you should have a bit more control over your assets (I’m not super involved in stock picking, but I’ve heard many times that governments can help their citizens), right? 
As another commenter has put it: there\'s something called a market crash that could occur in the second world country for most markets (I don\'t know how that would fit under US laws if I had done all of the above. \n\n' }, {'generated_text': "Tesla is on track to go from 1.46 to 1.79 per cent growth in Q3 (the fastest pace so far in the US), which will push down the share price.\n\nWhile the dividend could benefit Amazon’s growth, earnings also aren’t expected to be high at all, the company's annual earnings could be an indication that investors have a strong plan to boost sales by the end of the year if earnings season continues.\n\nThe latest financials showed earnings as of the end of July, followed by the earnings guidance from analysts at the Canadian Real Estate Association, which showed that Amazon’s revenues were up over $1.8 Trillion, which is a far cry from what was expected in early Q1.\n\nAmazon has grown the share price by as much as 1.6 percent since June 2020. Analysts had predicted that earnings growth in the stock would drop to 0.36 per cent for 2020, which would lead to Amazon’" } ``` ## Training procedure Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/finetune_distilgpt2_language_model_on_finance_dataset.ipynb) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
Uminosachi/dreamshaper_5-inpainting
Uminosachi
2023-07-07T09:06:39Z
33
1
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-02T04:49:47Z
--- license: creativeml-openrail-m --- This is an inpainting model, which has been converted from the [DreamShaper 5-inpainting](https://civitai.com/models/4384?modelVersionId=51767).
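A minimal usage sketch (untested), assuming the checkpoint follows the standard diffusers inpainting layout; `init.png`, `mask.png` and the prompt are placeholders, not part of this repository.

```python
# Untested sketch: load this checkpoint with the standard diffusers inpainting pipeline.
# "init.png" / "mask.png" are hypothetical local files.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "Uminosachi/dreamshaper_5-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("init.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))  # white = region to repaint

result = pipe(
    prompt="a red brick fireplace, photorealistic",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```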
digiplay/SoapMix2.5D_v2
digiplay
2023-07-07T09:04:51Z
285
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-20T08:41:36Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/29862?modelVersionId=39125 Original Author's DEMO image : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d0e364b4-3a53-4c8f-d248-3335dc23bd00/width=1024/00015-3123836998.jpeg)
erkam/sd-clevr-sg2layout-objects_cap-e2e
erkam
2023-07-07T09:01:02Z
3
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-2", "base_model:adapter:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-05T20:41:45Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - erkam/sd-clevr-sg2layout-objects_cap-e2e These are LoRA adaption weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v4 dataset. You can find some example images in the following.
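A hedged usage sketch (untested), assuming the LoRA weights were saved in the standard diffusers attention-processor format; the prompt is an arbitrary example.

```python
# Untested sketch: apply the LoRA attention weights from this repo on top of the
# base stabilityai/stable-diffusion-2 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Load the fine-tuned LoRA attention processors from this repository.
pipe.unet.load_attn_procs("erkam/sd-clevr-sg2layout-objects_cap-e2e")

image = pipe(
    "a scene with three rubber cubes and a small metal sphere",
    num_inference_steps=30,
).images[0]
image.save("clevr_lora_sample.png")
```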
aroot/eng-mya-simcse_longest_ssrl
aroot
2023-07-07T08:48:40Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T08:27:24Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-mya-simcse_longest_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-simcse_longest_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8495 - Bleu: 4.1358 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
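A hedged inference sketch (untested) for English→Burmese translation, assuming the tokenizer was saved alongside the model and uses the standard mBART-50 language codes (`en_XX` for English, `my_MM` for Burmese).

```python
# Untested sketch: English -> Burmese translation with this fine-tuned mBART-50 checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aroot/eng-mya-simcse_longest_ssrl"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="en_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["my_MM"],  # assumes mBART-50 code for Burmese
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```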
TheBloke/falcon-40b-sft-mix-1226-GGML
TheBloke
2023-07-07T08:45:12Z
5
11
transformers
[ "transformers", "falcon", "sft", "en", "de", "es", "fr", "dataset:OpenAssistant/oasst1", "dataset:databricks/databricks-dolly-15k", "license:apache-2.0", "region:us" ]
null
2023-07-04T23:32:03Z
--- license: apache-2.0 language: - en - de - es - fr tags: - sft inference: false datasets: - OpenAssistant/oasst1 - databricks/databricks-dolly-15k --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Open Assistant's Falcon 40B SFT MIX GGML These files are GGCC format model files for [Open Assistant's Falcon 40B SFT MIX](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226). These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp. GGCC is a new format created in a new fork of llama.cpp that introduced this new Falcon GGML-based support: [cmp-nc/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp). Currently these files will also not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers. But support should be added soon. ## Repositories available * [2, 3, 4, 5, 6, 8-bit GGCC models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-40b-sft-mix-1226-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226) ## Prompt template ``` <|prompter|>prompt<|endoftext|><|assistant|> ``` <!-- compatibility_ggml start --> ## Compatibility To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps: ``` git clone https://github.com/cmp-nct/ggllm.cpp cd ggllm.cpp rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release ``` Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.' Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example: ``` bin/falcon_main -t 8 -ngl 100 -b 1 -m falcon-40b-sft-mix-1226.ggccv1.q4_K.bin -p "<|prompter|>write a story about llamas<|endoftext|><|assistant|>" ``` You can specify `-ngl 100` regardles of your VRAM, as it will automatically detect how much VRAM is available to be used. Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have. `-b 1` reduces batch size to 1. This slightly lowers prompt evaluation time, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter. Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | falcon-40b-sft-mix-1226.ggccv1.q2_K.bin | q2_K | 2 | 13.74 GB | 16.24 GB | New k-quant method. 
Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | falcon-40b-sft-mix-1226.ggccv1.q3_K.bin | q3_K_S | 3 | 17.98 GB | 20.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | falcon-40b-sft-mix-1226.ggccv1.q4_K.bin | q4_K_S | 4 | 23.54 GB | 26.04 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | falcon-40b-sft-mix-1226.ggccv1.q5_K.bin | q5_K_S | 5 | 28.77 GB | 31.27 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | falcon-40b-sft-mix-1226.ggccv1.q6_K.bin | q6_K | 6 | 34.33 GB | 36.83 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | falcon-40b-sft-mix-1226.ggccv1.q8_0.bin | q8_0 | 8 | 44.46 GB | 46.96 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Spiking Neurons AB, Kevin Schuppel, Cory Kujawski, senxiiz, Luke Pendergrass, John Villwock, Ghost , Alex , Sean Connelly, Space Cruiser, Eugene Pentland, Pyrater, Matthew Berman, Dave, Derek Yates, Jonathan Leane, Viktor Bowallius, Michael Levine, Joseph William Delisle, Fred von Graf, Asp the Wyvern, Nikolai Manek, Pierre Kircher, webtim, K, RoA, Karl Bernard, Artur Olbinski, Rainer Wilmers, Ai Maven, Nathan LeClaire, Ajan Kanaga, Stephen Murray, Edmond Seymore, zynix , Imad Khwaja, John Detwiler, Randy H, subjectnull, Alps Aficionado, Greatston Gnanesh, Trenton Dambrowitz, Junyu Yang, Raven Klaugh, biorpg, Deep Realms, vamX, Talal Aujan, Johann-Peter Hartmann, WelcomeToTheClub, Chris McCloskey, Luke, chris gileta, terasurfer , Iucharbius , Preetika Verma, Willem Michiel, Fen Risland, SuperWojo, Khalefa Al-Ahmad, Daniel P. Andersen, Gabriel Puliatti, Illia Dulskyi, Willian Hasse, Oscar Rangel, ya boyyy, Mano Prime, Lone Striker, Kalila Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Open Assistant's Falcon 40B SFT MIX # Open-Assistant Falcon 40B SFT MIX Model This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM. It was trained on a mixture of OASST top-2 threads (exported on June 2, 2023), Dolly-15k and synthetic instruction datasets (see dataset configuration below). 
## Model Details - **Finetuned from:** [tiiuae/falcon-40b]((https://huggingface.co/tiiuae/falcon-40b) - **Model type:** Causal decoder-only transformer language model - **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish); - **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_sampling_noprefix2.json), [multiligual-60](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-05_OpenAssistant_falcon-40b-sft-mix-1226_multilingual_noprefix2.json) - **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/) - **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/feplc450) (checkpoint: 1226 steps) - **License:** Apache 2.0 - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord) ## Prompting Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token. Input prompt example: ``` <|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|> ``` The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply. ## Configuration Details Model: ``` falcon-40b: dtype: bf16 learning_rate: 1e-5 model_name: "tiiuae/falcon-40b" deepspeed_config: configs/zero3_config_falcon.json weight_decay: 0.0 max_length: 2048 warmup_steps: 20 gradient_checkpointing: true gradient_accumulation_steps: 1 per_device_train_batch_size: 18 per_device_eval_batch_size: 10 eval_steps: 120 save_strategy: steps save_steps: 613 num_train_epochs: 8 save_total_limit: 4 use_flash_attention: false residual_dropout: 0.3 residual_dropout_lima: true ``` Dataset: ``` sft9-stage2: # oasst_export: 100.00% (29899) # vicuna: 50.00% (16963) # code_alpaca: 50.00% (9510) # oa_wiki_qa_bart_10000row: 100.00% (9434) # grade_school_math_instructions: 100.00% (8351) # dolly15k: 100.00% (14250) use_custom_sampler: true datasets: - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0 input_file_path: 2023-06-02_oasst_all_labels.jsonl.gz val_split: 0.05 top_k: 2 - vicuna: fraction: 0.5 val_split: 0.025 max_val_set: 250 - code_alpaca: fraction: 0.5 val_split: 0.05 max_val_set: 250 - oa_wiki_qa_bart_10000row: val_split: 0.05 max_val_set: 250 - grade_school_math_instructions: val_split: 0.05 - dolly15k: val_split: 0.05 max_val_set: 300 ```
XSarchitectural/XSarchitecturalV3Commercialbuildingrendering
XSarchitectural
2023-07-07T08:42:55Z
54
2
diffusers
[ "diffusers", "architecture", "architectural", "design", "stable-diffusion", "text-to-image", "en", "license:other", "region:us" ]
text-to-image
2023-07-07T08:17:12Z
--- license: other language: - en library_name: diffusers pipeline_tag: text-to-image tags: - architecture - architectural - design - stable-diffusion ---
Abzu/mpt-30b-q8
Abzu
2023-07-07T08:41:54Z
21
3
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:allenai/c4", "dataset:mc4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack-dedup", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "8-bit", "region:us" ]
text-generation
2023-07-07T08:35:33Z
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - allenai/c4 - mc4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack-dedup - allenai/s2orc inference: false --- # MPT-30B MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision. This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-30B is: * **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)). * **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409). * **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-30B: The following models are finetuned on MPT-30B: * [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following. Built by finetuning MPT-30B on several carefully curated datasets. * License: _CC-BY-SA-3.0_ * [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai), [GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets. 
* License: _CC-By-NC-SA-4.0_ * [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat) ## Model Date June 22, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-30b', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-30b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` The model was trained initially with a sequence length of 2048 with an additional pretraining stage for sequence length adapation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-30b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). 
```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 29.95B | |n_layers | 48 | | n_heads | 64 | | d_model | 7168 | | vocab size | 50432 | | sequence length | 8192 | ## Training Data ### Streaming Datasets Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training. StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset. ### Data Mix The model was trained for 1T tokens on the following data mix: | Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs | |-------------|----------------------------|------------|----------------------------|--------| | mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 | | c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 | | RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 | | The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 | | RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 | | The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 | | Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 | | RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 | | RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 | | RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 | Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long. The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) It was trained on a diverse mix of data that includes code (The Pile) (2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces (3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters. 
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)). ### Training Configuration The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform): (i) First it was trained on 440 A100-40GBs with a batch size of 1760. (ii) Then, on 216 A100-40GBs with a batch size of 1728. (iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens. The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-30B (Base) is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent. MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-30B: Raising the bar for open-source foundation models}, year = {2023}, url = {www.mosaicml.com/blog/mpt-30b}, note = {Accessed: 2023-06-22}, urldate = {2023-06-22} } ```
at2507/distilbert-base-uncased-finetuned-imdb
at2507
2023-07-07T08:30:36Z
121
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-07T06:28:27Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7003 | 1.0 | 157 | 2.4900 | | 2.5794 | 2.0 | 314 | 2.4228 | | 2.5268 | 3.0 | 471 | 2.4355 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
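A minimal sketch (untested) of querying the fine-tuned masked language model through the fill-mask pipeline; the sentence is an arbitrary movie-review style example.

```python
# Untested sketch: top predictions for a masked token.
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="at2507/distilbert-base-uncased-finetuned-imdb")

for pred in mask_filler("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```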
soduhh/mt5-small-finetuned-amazon-en-fr
soduhh
2023-07-07T08:30:20Z
5
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-07T07:02:53Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: soduhh/mt5-small-finetuned-amazon-en-fr results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # soduhh/mt5-small-finetuned-amazon-en-fr This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.9132 - Validation Loss: 3.2661 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 11184, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.1676 | 4.1323 | 0 | | 5.6798 | 3.6659 | 1 | | 4.9731 | 3.5322 | 2 | | 4.5665 | 3.4177 | 3 | | 4.2967 | 3.3513 | 4 | | 4.1126 | 3.3000 | 5 | | 3.9828 | 3.2671 | 6 | | 3.9132 | 3.2661 | 7 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
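A hedged sketch (untested) of running the checkpoint for review summarization, the task implied by the model name; the TensorFlow weights are loaded explicitly and the review text is an arbitrary example.

```python
# Untested sketch: summarize a short product review with the TF checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="soduhh/mt5-small-finetuned-amazon-en-fr",
    framework="tf",
)

review = (
    "I bought this kettle last month. It boils water quickly and the handle stays cool, "
    "but the lid is quite stiff to open."
)
print(summarizer(review, max_length=30)[0]["summary_text"])
```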
irfan62622/Reinforce-pixelcopter
irfan62622
2023-07-07T08:26:21Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-07T08:25:13Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 15.10 +/- 15.86 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
tyavika/LR1E5_BS32_Distilbert-QA-Pytorch-FULL
tyavika
2023-07-07T08:21:29Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-07T05:05:25Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: LR1E5_BS32_Distilbert-QA-Pytorch-FULL results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LR1E5_BS32_Distilbert-QA-Pytorch-FULL This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2043 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6135 | 1.0 | 1645 | 1.3826 | | 1.2998 | 2.0 | 3290 | 1.2342 | | 1.11 | 3.0 | 4935 | 1.1911 | | 0.9527 | 4.0 | 6580 | 1.1765 | | 0.8626 | 5.0 | 8225 | 1.1848 | | 0.7854 | 6.0 | 9870 | 1.2043 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
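A minimal sketch (untested) of extractive question answering with this checkpoint; the question and context are arbitrary examples.

```python
# Untested sketch: extract an answer span from a context passage.
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR1E5_BS32_Distilbert-QA-Pytorch-FULL")

result = qa(
    question="What do extractive QA models predict?",
    context=(
        "Extractive question answering models predict the start and end positions of "
        "the answer span inside a given context passage."
    ),
)
print(result["answer"], round(result["score"], 3))
```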
aroot/eng-guj-simcse_longestplus_usrl
aroot
2023-07-07T08:15:14Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:53:43Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_longestplus_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_longestplus_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2755 - Bleu: 2.8744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
vibhav18/InsuranceMicroLLM
vibhav18
2023-07-07T08:14:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-07T08:10:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0.dev0
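A hedged loading sketch (untested) that mirrors the quantization settings listed above; the base model is not named in this card, so the identifier below is a placeholder that must be replaced.

```python
# Untested sketch: load the base model in 4-bit NF4 as configured above and attach the adapter.
# BASE_MODEL_ID is a placeholder - the card does not state which base model was used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL_ID = "replace-with-the-base-model-id"  # placeholder, not stated in this card

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "vibhav18/InsuranceMicroLLM")
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
```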
DXD-FYP/Covid-19
DXD-FYP
2023-07-07T08:11:35Z
0
0
fastai
[ "fastai", "image-classification", "region:us" ]
image-classification
2023-07-07T07:38:02Z
--- pipeline_tag: image-classification library_name: fastai ---
aroot/eng-guj-simcse_longest_ssrl
aroot
2023-07-07T08:07:40Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:45:56Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-guj-simcse_longest_ssrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-guj-simcse_longest_ssrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2275 - Bleu: 2.8324 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
Redamancy2299/dreambooth2
Redamancy2299
2023-07-07T07:59:44Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:stabilityai/stable-diffusion-2-1", "base_model:finetune:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-20T08:23:40Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 instance_prompt: A photo of a young people sleeping in front of a computer tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Redamancy2299/dreambooth2 This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on A photo of a young people sleeping in front of a computer using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
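A minimal sampling sketch (untested) that reuses the instance prompt reported above.

```python
# Untested sketch: sample from this DreamBooth checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Redamancy2299/dreambooth2", torch_dtype=torch.float16
).to("cuda")

# The instance prompt the card reports training on.
prompt = "A photo of a young people sleeping in front of a computer"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("dreambooth_sample.png")
```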
KJan05/ppo-SnowballTarget
KJan05
2023-07-07T07:59:35Z
14
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-07-06T10:37:37Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: KJan05/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
SATOU0ZHU/anythingv5-Prt-RE
SATOU0ZHU
2023-07-07T07:46:43Z
31
1
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-07T04:57:51Z
Diffusers version of Anything V5.
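A minimal loading sketch (untested) with the diffusers text-to-image pipeline; the prompt and negative prompt are arbitrary examples.

```python
# Untested sketch: load this diffusers conversion and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SATOU0ZHU/anythingv5-Prt-RE", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, silver hair, school uniform, cherry blossoms, best quality",
    negative_prompt="lowres, bad anatomy, bad hands",
    num_inference_steps=28,
).images[0]
image.save("anything_v5_sample.png")
```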
kmariunas/2023-07-05-cased
kmariunas
2023-07-07T07:44:43Z
103
0
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-07T06:47:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 108 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss` Parameters of the fit()-Method: ``` { "epochs": 40, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 429.20000000000005, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
shaunkyn/sd_webui_LoRa
shaunkyn
2023-07-07T07:43:46Z
0
1
null
[ "license:unknown", "region:us" ]
null
2023-05-25T05:43:09Z
--- license: unknown --- Source: https://civitai.com/models/18095/chinese-bmale-likeness https://civitai.com/models/44922/oc-illustration https://civitai.com/models/47859?modelVersionId=64536 https://civitai.com/models/43132/oppa Trigger Words:OPPAV3 https://civitai.com/models/18224/cryptopunks Trigger Words:art by punks_sd American Comic Style LoRa: https://civitai.com/models/22912/bored-ape-yacht-club-lora https://civitai.com/models/54127/sbahj-comics-homestuck https://civitai.com/models/41417/steamed-diffusion https://civitai.com/models/17361/peanuts-comics-art-style https://civitai.com/models/20606/modern-american-comics-style-1
nolanaatama/shrkmfbkhllv1stgnrvcv2300pchsyy5
nolanaatama
2023-07-07T07:43:31Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T07:40:09Z
--- license: creativeml-openrail-m ---
aroot/eng-fra-simcse_longestplus_usrl
aroot
2023-07-07T07:40:11Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-07T07:21:28Z
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: eng-fra-simcse_longestplus_usrl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-fra-simcse_longestplus_usrl This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1314 - Bleu: 32.5256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
KJan05/Pyramids-Training-v1
KJan05
2023-07-07T07:32:21Z
10
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-07T07:32:15Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: KJan05/Pyramids-Training-v1 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Propofol/0707_2_finetuned-finetuned-localization
Propofol
2023-07-07T07:23:46Z
103
0
transformers
[ "transformers", "pytorch", "esm", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-07T05:36:20Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: 0707_2_finetuned-finetuned-localization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0707_2_finetuned-finetuned-localization This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.1445 - Accuracy: 0.4167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.9296 | 1.0 | 2500 | 1.2921 | 0.4267 | | 0.6704 | 2.0 | 5000 | 1.6807 | 0.432 | | 0.3695 | 3.0 | 7500 | 2.3376 | 0.4187 | | 0.1416 | 4.0 | 10000 | 3.6342 | 0.424 | | 0.031 | 5.0 | 12500 | 4.1445 | 0.4167 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
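A heavily hedged inference sketch (untested): the card does not document the expected input format or the label set, so the protein sequence below is an arbitrary example chosen only because the backbone is ESM.

```python
# Untested sketch: run the sequence classifier on an arbitrary protein sequence.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Propofol/0707_2_finetuned-finetuned-localization",
)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQV"
print(classifier(sequence))
```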
dvinagre/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan
dvinagre
2023-07-07T07:21:12Z
33
1
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-06-26T09:22:46Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan This model is a fine-tuned version of [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.7145 - Accuracy: 0.88 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9771 | 1.0 | 225 | 1.7112 | 0.48 | | 1.0169 | 2.0 | 450 | 1.1513 | 0.62 | | 0.7104 | 3.0 | 675 | 0.8799 | 0.7 | | 1.5425 | 4.0 | 900 | 0.7419 | 0.8 | | 0.2908 | 5.0 | 1125 | 0.6713 | 0.8 | | 0.8275 | 6.0 | 1350 | 0.6961 | 0.84 | | 0.0298 | 7.0 | 1575 | 0.8689 | 0.82 | | 0.0163 | 8.0 | 1800 | 0.7662 | 0.86 | | 0.0162 | 9.0 | 2025 | 0.7143 | 0.88 | | 0.2649 | 10.0 | 2250 | 0.7145 | 0.88 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
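A minimal sketch (untested) of classifying the genre of a local audio clip; `song.wav` is a hypothetical file path.

```python
# Untested sketch: genre classification of a local audio file (GTZAN clips are ~30 s).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="dvinagre/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-gtzan",
)

for pred in classifier("song.wav"):
    print(f"{pred['label']:>10}  {pred['score']:.3f}")
```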
ssdxc/lora_ckpt
ssdxc
2023-07-07T07:05:14Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-06T15:09:25Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - ssdxc/lora_ckpt These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
Bugsys0302/POVBGV2
Bugsys0302
2023-07-07T07:03:04Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T06:59:06Z
--- license: creativeml-openrail-m ---