| Column | Type | Min | Max |
|--------|------|-----|-----|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-13 18:27:38 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (518 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-13 18:27:10 |
| card | string (length) | 11 | 1.01M |
shahruk/hocammm
shahruk
2023-10-04T17:50:21Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-10-04T17:50:21Z
--- license: bigscience-openrail-m ---
gokuls/HBERTv1_48_L2_H512_A8
gokuls
2023-10-04T17:45:39Z
46
0
transformers
[ "transformers", "pytorch", "hybridbert", "fill-mask", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-02T10:33:58Z
--- tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - accuracy model-index: - name: HBERTv1_48_L2_H512_A8 results: - task: name: Masked Language Modeling type: fill-mask dataset: name: gokuls/wiki_book_corpus_complete_processed_bert_dataset type: gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - name: Accuracy type: accuracy value: 0.45301927514806384 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L2_H512_A8 This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset. It achieves the following results on the evaluation set: - Loss: 3.0911 - Accuracy: 0.4530 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 124 - eval_batch_size: 124 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10000 - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.13.3
Ben141/LLM2
Ben141
2023-10-04T17:43:01Z
0
0
null
[ "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-10-04T17:14:58Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: LLM2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLM2 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 120 ### Training results ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
calcifer2023/distilbert-base-uncased-finetuned-sentiment
calcifer2023
2023-10-04T17:42:40Z
107
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T17:42:00Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1374 - Accuracy: 0.57 - F1: 0.4139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.3727 | 1.0 | 32 | 1.2077 | 0.57 | 0.4139 | | 1.0734 | 2.0 | 64 | 1.1374 | 0.57 | 0.4139 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Tokenizers 0.14.0
gokuls/HBERTv1_48_L2_H128_A2
gokuls
2023-10-04T17:41:26Z
45
0
transformers
[ "transformers", "pytorch", "hybridbert", "fill-mask", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-02T10:33:42Z
--- tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - accuracy model-index: - name: HBERTv1_48_L2_H128_A2 results: - task: name: Masked Language Modeling type: fill-mask dataset: name: gokuls/wiki_book_corpus_complete_processed_bert_dataset type: gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - name: Accuracy type: accuracy value: 0.14999065459120106 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L2_H128_A2 This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset. It achieves the following results on the evaluation set: - Loss: 6.0202 - Accuracy: 0.1500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 162 - eval_batch_size: 162 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10000 - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.13.3
gokuls/HBERTv1_48_L2_H64_A2
gokuls
2023-10-04T17:40:51Z
46
0
transformers
[ "transformers", "pytorch", "hybridbert", "fill-mask", "generated_from_trainer", "dataset:gokuls/wiki_book_corpus_complete_processed_bert_dataset", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-02T10:33:58Z
--- tags: - generated_from_trainer datasets: - gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - accuracy model-index: - name: HBERTv1_48_L2_H64_A2 results: - task: name: Masked Language Modeling type: fill-mask dataset: name: gokuls/wiki_book_corpus_complete_processed_bert_dataset type: gokuls/wiki_book_corpus_complete_processed_bert_dataset metrics: - name: Accuracy type: accuracy value: 0.14704149331934327 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HBERTv1_48_L2_H64_A2 This model is a fine-tuned version of [](https://huggingface.co/) on the gokuls/wiki_book_corpus_complete_processed_bert_dataset dataset. It achieves the following results on the evaluation set: - Loss: 6.1425 - Accuracy: 0.1470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 180 - eval_batch_size: 180 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10000 - num_epochs: 100 ### Training results ### Framework versions - Transformers 4.33.3 - Pytorch 1.14.0a0+410ce96 - Datasets 2.14.5 - Tokenizers 0.13.3
crangana/trained-race
crangana
2023-10-04T17:39:16Z
195
1
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "generated_from_trainer", "dataset:fair_face", "base_model:microsoft/resnet-50", "base_model:finetune:microsoft/resnet-50", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-04T16:14:27Z
--- license: apache-2.0 base_model: microsoft/resnet-50 tags: - generated_from_trainer datasets: - fair_face metrics: - accuracy model-index: - name: trained-race results: - task: name: Image Classification type: image-classification dataset: name: fair_face type: fair_face config: '0.25' split: validation args: '0.25' metrics: - name: Accuracy type: accuracy value: 0.625798794960745 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained-race This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the fair_face dataset. It achieves the following results on the evaluation set: - Loss: 0.9830 - Accuracy: 0.6258 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3923 | 0.18 | 1000 | 1.3550 | 0.4712 | | 1.1517 | 0.37 | 2000 | 1.1854 | 0.5429 | | 1.2405 | 0.55 | 3000 | 1.1001 | 0.5754 | | 1.0752 | 0.74 | 4000 | 1.0330 | 0.6018 | | 1.0986 | 0.92 | 5000 | 0.9973 | 0.6173 | | 1.0007 | 1.11 | 6000 | 0.9735 | 0.6279 | | 0.9851 | 1.29 | 7000 | 0.9830 | 0.6258 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
jamesmac/ppo-LunarLander-v2
jamesmac
2023-10-04T17:36:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T17:36:12Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 212.85 +/- 58.38 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
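The card above leaves its usage section as a TODO. Below is a minimal sketch of what that section typically contains, assuming the checkpoint was pushed with the standard `huggingface_sb3` filename `ppo-LunarLander-v2.zip` (an assumption; check the repo's file list if it differs):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(
    repo_id="jamesmac/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```

The `YuZhong-Chen/ppo-LunarLander-v2` entry later in this listing follows the same pattern.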
alexrodpas/Extr-QA-DistilBERT
alexrodpas
2023-10-04T17:28:54Z
118
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "en", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-09-29T16:53:11Z
--- license: apache-2.0 datasets: - squad language: - en library_name: transformers pipeline_tag: question-answering ---
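This card carries only front matter and no usage example. A minimal extractive-QA sketch using the standard `transformers` pipeline, with the task taken from the card's `pipeline_tag` (the question and context strings are illustrative):

```python
from transformers import pipeline

# Extractive question answering with this checkpoint.
qa = pipeline("question-answering", model="alexrodpas/Extr-QA-DistilBERT")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This extractive QA model was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```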
codyreading/dreambooth-bear-lawn
codyreading
2023-10-04T17:24:48Z
29
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-04T17:18:06Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: A photo of sks stuffed animal tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - codyreading/dreambooth-bear-lawn This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on A photo of sks stuffed animal using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
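The DreamBooth card names its instance prompt but includes no inference snippet. A minimal sketch with `diffusers` (standard `StableDiffusionPipeline` loading; the prompt extends the card's instance prompt purely for illustration):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "codyreading/dreambooth-bear-lawn",
    torch_dtype=torch.float16,
).to("cuda")

# "A photo of sks stuffed animal" is the instance prompt the card reports.
image = pipe("A photo of sks stuffed animal on a lawn").images[0]
image.save("bear_lawn.png")
```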
SuryaKrishna02/open-llama-3b-linear-algebra
SuryaKrishna02
2023-10-04T17:22:06Z
6
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T17:02:46Z
--- license: apache-2.0 language: - en library_name: transformers ---
longface/LR-model
longface
2023-10-04T17:15:26Z
1
0
peft
[ "peft", "region:us" ]
null
2023-10-04T17:15:03Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
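The PEFT card records the quantization config used in training but not how to load the adapter, and it never names the base model. A sketch under that caveat; `BASE_MODEL` below is a placeholder, not information from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "<base-model-id>"  # not stated in the card; fill in before running

# Mirror the bitsandbytes config the card lists: 4-bit nf4, float16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Attach the adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "longface/LR-model")
```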
librarian-bots/is-new-dataset-from-abstract
librarian-bots
2023-10-04T17:15:17Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-10-04T17:05:52Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("librarian-bots/is-new-dataset-from-abstract") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
abhikatta/finetuning-sentiment-model-3000-samples
abhikatta
2023-10-04T17:00:50Z
106
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T16:55:30Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8833333333333333 - name: F1 type: f1 value: 0.887459807073955 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4687 - Accuracy: 0.8833 - F1: 0.8875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
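The auto-generated card above reports IMDB accuracy and F1 but no usage code. A minimal sketch with the `transformers` pipeline (label names such as `LABEL_0`/`LABEL_1` depend on the repo's config and are not documented in the card):

```python
from transformers import pipeline

# Binary IMDB sentiment classifier fine-tuned from distilbert-base-uncased.
classifier = pipeline(
    "text-classification",
    model="abhikatta/finetuning-sentiment-model-3000-samples",
)

print(classifier("A beautifully shot film whose script never quite lands."))
```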
emonty777/bart-base-cnndm
emonty777
2023-10-04T16:57:56Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-04T00:20:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cnn_dailymail metrics: - rouge model-index: - name: bart-base-cnndm results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test args: 3.0.0 metrics: - name: Rouge1 type: rouge value: 25.0336 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-cnndm This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 1.5802 - Rouge1: 25.0336 - Rouge2: 12.5344 - Rougel: 20.8721 - Rougelsum: 23.5806 - Gen Len: 19.9998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.845 | 1.0 | 8972 | 1.6461 | 24.8325 | 12.327 | 20.6952 | 23.3653 | 19.9998 | | 1.7427 | 2.0 | 17945 | 1.6098 | 24.9118 | 12.4577 | 20.786 | 23.4624 | 19.9998 | | 1.6727 | 3.0 | 26917 | 1.5881 | 24.9723 | 12.4738 | 20.8317 | 23.5195 | 19.9994 | | 1.6288 | 4.0 | 35888 | 1.5802 | 25.0336 | 12.5344 | 20.8721 | 23.5806 | 19.9998 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.0.1+cu118 - Datasets 2.9.0 - Tokenizers 0.13.3
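The `emonty777/bart-base-cnndm` card reports ROUGE on cnn_dailymail but omits inference code. A minimal summarization sketch with the `transformers` pipeline (the article text and generation lengths are illustrative choices, not from the card):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="emonty777/bart-base-cnndm")

article = (
    "The city council voted on Tuesday to expand the bike-lane network, "
    "citing a year-long study that found a 30 percent rise in cycling "
    "commutes and a measurable drop in downtown congestion."
)
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```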
minhbui/viettel_v3.2_adapter
minhbui
2023-10-04T16:56:21Z
8
0
transformers
[ "transformers", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2023-10-04T16:52:01Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: ckpts/llama2-7b-viettel_v3.2_2epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # ckpts/llama2-7b-viettel_v3.2_2epoch This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3727 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - total_eval_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4378 | 0.12 | 200 | 0.4331 | | 0.4266 | 0.24 | 400 | 0.4187 | | 0.4199 | 0.37 | 600 | 0.4086 | | 0.4024 | 0.49 | 800 | 0.4016 | | 0.4003 | 0.61 | 1000 | 0.3966 | | 0.3849 | 0.73 | 1200 | 0.3914 | | 0.3814 | 0.86 | 1400 | 0.3865 | | 0.3825 | 0.98 | 1600 | 0.3831 | | 0.3557 | 1.1 | 1800 | 0.3812 | | 0.3531 | 1.22 | 2000 | 0.3789 | | 0.3444 | 1.35 | 2200 | 0.3771 | | 0.3411 | 1.47 | 2400 | 0.3752 | | 0.35 | 1.59 | 2600 | 0.3738 | | 0.3586 | 1.71 | 2800 | 0.3733 | | 0.349 | 1.84 | 3000 | 0.3728 | | 0.357 | 1.96 | 3200 | 0.3727 | ### Framework versions - Transformers 4.34.0.dev0 - Pytorch 2.0.1+cu117 - Datasets 2.14.4 - Tokenizers 0.14.0
Charlie911/vicuna-7b-v1.5-lora-mixed-datasets-time-unit
Charlie911
2023-10-04T16:50:32Z
6
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
2023-10-04T16:45:40Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
YuZhong-Chen/ppo-LunarLander-v2
YuZhong-Chen
2023-10-04T16:43:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T16:43:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 251.20 +/- 24.18 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
TheBloke/Mistralic-7B-1-GPTQ
TheBloke
2023-10-04T16:27:56Z
13
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "base_model:SkunkworksAI/Mistralic-7B-1", "base_model:quantized:SkunkworksAI/Mistralic-7B-1", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-04T13:33:30Z
--- base_model: SkunkworksAI/Mistralic-7B-1 inference: false model_creator: SkunkworksAI model_name: Mistralic 7B-1 model_type: mistral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### System: {system_message} ### Instruction: {prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistralic 7B-1 - GPTQ - Model creator: [SkunkworksAI](https://huggingface.co/SkunkworksAI) - Original model: [Mistralic 7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1) <!-- description start --> ## Description This repo contains GPTQ model files for [SkunkworksAI's Mistralic 7B-1](https://huggingface.co/SkunkworksAI/Mistralic-7B-1). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistralic-7B-1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistralic-7B-1-GGUF) * [SkunkworksAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SkunkworksAI/Mistralic-7B-1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Mistralic ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### System: {system_message} ### Instruction: {prompt} ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Mistralic-7B-1-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Mistralic-7B-1-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Mistralic-7B-1-GPTQ`: ```shell mkdir Mistralic-7B-1-GPTQ huggingface-cli download TheBloke/Mistralic-7B-1-GPTQ --local-dir Mistralic-7B-1-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Mistralic-7B-1-GPTQ huggingface-cli download TheBloke/Mistralic-7B-1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Mistralic-7B-1-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Mistralic-7B-1-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistralic-7B-1-GPTQ --local-dir Mistralic-7B-1-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Mistralic-7B-1-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Mistralic-7B-1-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Mistralic-7B-1-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Mistralic-7B-1-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Mistralic-7B-1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### System: {system_message} ### Instruction: {prompt} ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Mistralic-7B-1-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### System: {system_message} ### Instruction: {prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: SkunkworksAI's Mistralic 7B-1 <p><h1> 🦾 Mistralic-7B-1 🦾 </h1></p> Special thanks to Together Compute for sponsoring Skunkworks with compute! **INFERENCE** ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer torch.set_default_device('cuda') system_prompt = "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n" system_no_input_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n" def generate_prompt(instruction, input=None): if input: prompt = f"### System:\n{system_prompt}\n\n" else: prompt = f"### System:\n{system_no_input_prompt}\n\n" prompt += f"### Instruction:\n{instruction}\n\n" if input: prompt += f"### Input:\n{input}\n\n" return prompt + """### Response:\n""" device = "cuda" model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/Mistralic-7B-1") tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/Mistralic-7B-1") while True: instruction = input("Enter Instruction: ") instruction = generate_prompt(instruction) inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False) outputs = model.generate(**inputs, max_length=1000, do_sample=True, temperature=0.01, use_cache=True, eos_token_id=tokenizer.eos_token_id) text = tokenizer.batch_decode(outputs)[0] print(text) ``` **EVALUATION** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7e345f92b20f7a38bf47a/ycpNhdGZHGbai_wslT2Bg.png) Average: 0.72157 For comparison: mistralai/Mistral-7B-v0.1 scores 0.7116 mistralai/Mistral-7B-Instruct-v0.1 scores 0.6794
sebastiantrbl/DialoGPT-daily-dialog-txt
sebastiantrbl
2023-10-04T16:16:38Z
221
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "conversational", "base_model:microsoft/DialoGPT-medium", "base_model:finetune:microsoft/DialoGPT-medium", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-02T15:10:34Z
--- license: mit base_model: microsoft/DialoGPT-medium tags: - generated_from_trainer model-index: - name: DialoGPT-daily-dialog-txt results: [] pipeline_tag: conversational --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DialoGPT-daily-dialog-txt This model is a fine-tuned version of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
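The card above documents training but not inference. A minimal single-turn sketch following the standard DialoGPT recipe (the sampling settings are illustrative, not from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sebastiantrbl/DialoGPT-daily-dialog-txt"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Encode one user turn, terminated with EOS as DialoGPT expects.
input_ids = tokenizer.encode("How was your day?" + tokenizer.eos_token, return_tensors="pt")

# Generate the bot's reply and decode only the newly generated tokens.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```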
alexisdpc/t5-small-finetuned-xsum
alexisdpc
2023-10-04T16:03:36Z
103
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-28T12:54:00Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
Thunder-rk/Minebot
Thunder-rk
2023-10-04T16:00:58Z
0
0
null
[ "hi", "en", "ta", "ml", "kn", "dataset:Open-Orca/OpenOrca", "region:us" ]
null
2023-10-04T15:58:47Z
--- datasets: - Open-Orca/OpenOrca language: - hi - en - ta - ml - kn ---
Jayem-11/swahili_summary
Jayem-11
2023-10-04T15:54:08Z
61
0
transformers
[ "transformers", "tf", "t5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-02T11:31:54Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: swahili_summary results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # swahili_summary This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0394 - Validation Loss: 1.8347 - Train Rougel: tf.Tensor(0.08596196, shape=(), dtype=float32) - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rougel | Epoch | |:----------:|:---------------:|:-----------------------------------------------:|:-----:| | 2.4004 | 1.9627 | tf.Tensor(0.052536312, shape=(), dtype=float32) | 0 | | 2.0394 | 1.8347 | tf.Tensor(0.08596196, shape=(), dtype=float32) | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.1.0 - Tokenizers 0.13.3
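The Keras card gives loss and ROUGE-L curves but no inference snippet. A TF sketch using the classes implied by the repo's `tf` tag; the `summarize:` prefix follows general T5 convention and the sample text is illustrative (neither is confirmed by the card):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "Jayem-11/swahili_summary"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

# "summarize:" is the usual T5 task prefix; the card does not say whether
# it was used during fine-tuning.
text = "summarize: Wakulima wa mkoa wa Morogoro wamepata mavuno makubwa mwaka huu ..."
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```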
ayshi/distil_base2
ayshi
2023-10-04T15:54:06Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T15:34:52Z
--- license: apache-2.0 base_model: distilbert-base-multilingual-cased tags: - generated_from_keras_callback model-index: - name: ayshi/distil_base2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ayshi/distil_base2 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9433 - Validation Loss: 0.9417 - Train Accuracy: 0.7156 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 320, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.3779 | 1.1529 | 0.6667 | 0 | | 1.1276 | 1.0669 | 0.6667 | 1 | | 0.9433 | 0.9417 | 0.7156 | 2 | ### Framework versions - Transformers 4.34.0 - TensorFlow 2.13.0 - Datasets 2.14.5 - Tokenizers 0.14.0
ldos/text_shortening_model_v65
ldos
2023-10-04T15:47:10Z
104
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-04T15:32:20Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: text_shortening_model_v65 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_shortening_model_v65 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1783 - Bert precision: 0.8964 - Bert recall: 0.8977 - Bert f1-score: 0.8966 - Average word count: 6.4565 - Max word count: 16 - Min word count: 2 - Average token count: 10.5686 - % shortened texts with length > 12: 2.002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts with length > 12 | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:-----------:|:-------------:|:------------------:|:--------------:|:--------------:|:-------------------:|:----------------------------------:| | 1.7747 | 1.0 | 146 | 1.3200 | 0.8806 | 0.8825 | 0.881 | 6.7818 | 18 | 2 | 10.6827 | 2.1021 | | 1.3684 | 2.0 | 292 | 1.2106 | 0.8857 | 0.8858 | 0.8852 | 6.5335 | 18 | 2 | 10.4835 | 1.7017 | | 1.2448 | 3.0 | 438 | 1.1635 | 0.8862 | 0.8883 | 0.8868 | 6.6246 | 18 | 1 | 10.6817 | 2.1021 | | 1.1406 | 4.0 | 584 | 1.1386 | 0.8897 | 0.8923 | 0.8905 | 6.6697 | 18 | 2 | 10.6767 | 2.2022 | | 1.0623 | 5.0 | 730 | 1.1373 | 0.889 | 0.893 | 0.8905 | 6.6897 | 18 | 2 | 10.7568 | 1.5015 | | 1.0034 | 6.0 | 876 | 1.1111 | 0.8923 | 0.8953 | 0.8933 | 6.5876 | 18 | 2 | 10.6927 | 1.7017 | | 0.9391 | 7.0 | 1022 | 1.1037 | 0.8927 | 0.8947 | 0.8932 | 6.5455 | 18 | 2 | 10.6196 | 1.3013 | | 0.8868 | 8.0 | 1168 | 1.0997 | 0.8949 | 0.8959 | 0.895 | 6.4805 | 18 | 2 | 10.5836 | 1.4014 | | 0.8443 | 9.0 | 1314 | 1.1011 | 0.8939 | 0.8965 | 0.8947 | 6.5626 | 18 | 2 | 10.6386 | 1.5015 | | 0.8117 | 10.0 | 1460 | 1.0997 | 0.8957 | 0.8981 | 0.8965 | 6.4865 | 16 | 2 | 10.6066 | 1.001 | | 0.7844 | 11.0 | 1606 | 1.1153 | 0.8976 | 0.8979 | 0.8973 | 6.4404 | 18 | 2 | 10.5345 | 1.5015 | | 0.7593 | 12.0 | 1752 | 1.1126 | 0.8946 | 0.8988 | 0.8962 | 6.6356 | 18 | 2 | 10.7698 | 1.9019 | | 0.7249 | 13.0 | 1898 | 1.1047 | 0.8968 | 0.8991 | 0.8975 | 6.5335 | 16 | 2 | 10.6396 | 1.4014 | | 0.7048 | 14.0 | 2044 | 1.1127 | 0.8961 | 0.8984 | 0.8968 | 6.5275 | 16 | 2 | 10.6336 | 1.4014 | | 0.6828 | 15.0 | 2190 | 1.1237 | 0.8965 | 0.8982 | 0.8969 | 6.4675 | 16 | 2 | 10.5906 | 1.7017 | | 0.6558 | 16.0 | 2336 | 1.1221 | 0.8975 | 0.8972 | 0.8969 | 6.3634 | 16 | 1 | 10.4985 | 1.2012 | | 0.6296 | 17.0 | 2482 | 1.1296 | 0.8962 | 0.8982 | 0.8968 | 6.4775 | 16 | 1 | 10.6496 | 1.9019 | | 0.6304 | 18.0 | 2628 | 1.1334 | 0.8981 | 0.898 | 0.8976 | 6.3724 | 16 | 1 | 10.4755 | 1.6016 | | 0.6124 | 19.0 | 2774 | 1.1463 | 0.898 | 0.9006 | 0.8989 | 6.5075 | 15 | 2 | 10.6246 | 1.5015 | | 0.6001 | 20.0 | 2920 | 1.1547 | 0.8982 | 0.8997 | 0.8984 | 6.4925 | 16 | 2 | 10.5766 | 1.9019 | | 0.5834 | 21.0 | 3066 | 1.1551 | 0.8972 | 0.8973 | 0.8967 | 6.3323 | 16 | 2 | 10.4705 | 1.7017 | | 0.5707 | 22.0 | 3212 | 1.1687 | 0.897 | 0.899 | 0.8976 | 6.4665 | 16 | 2 | 10.6026 | 1.7017 | | 0.5667 | 23.0 | 3358 | 1.1656 | 0.8965 | 0.8981 | 0.8968 | 6.4585 | 16 | 2 | 10.5726 | 2.002 | | 0.5519 | 24.0 | 3504 | 1.1747 | 0.8968 | 0.8984 | 0.8971 | 6.4885 | 16 | 2 | 10.5616 | 2.1021 | | 0.5538 | 25.0 | 3650 | 1.1754 | 0.8967 | 0.8983 | 0.897 | 6.4735 | 16 | 2 | 10.5676 | 2.002 | | 0.5403 | 26.0 | 3796 | 1.1734 | 0.8968 | 0.8983 | 0.8971 | 6.4835 | 16 | 2 | 10.6036 | 1.9019 | | 0.5371 | 27.0 | 3942 | 1.1735 | 0.8964 | 0.8982 | 0.8968 | 6.4865 | 16 | 2 | 10.5696 | 2.1021 | | 0.5381 | 28.0 | 4088 | 1.1767 | 0.8968 | 0.8982 | 0.897 | 6.4735 | 16 | 2 | 10.5926 | 1.9019 | | 0.5278 | 29.0 | 4234 | 1.1771 | 0.8966 | 0.8975 | 0.8966 | 6.4454 | 16 | 2 | 10.5556 | 2.002 | | 0.5249 | 30.0 | 4380 | 1.1783 | 0.8964 | 0.8977 | 0.8966 | 6.4565 | 16 | 2 | 10.5686 | 2.002 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
usvsnsp/pythia-70m-ppo
usvsnsp
2023-10-04T15:45:21Z
165
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-04T13:53:13Z
Wandb Run: https://wandb.ai/eleutherai/pythia-rlhf/runs/gy2g8jj1 Model Evals: | Tasks |Version|Filter| Metric |Value | |Stderr| |--------------|-------|------|----------|-----:|---|-----:| |arc_challenge |Yaml |none |acc |0.2253|± |0.0122| | | |none |acc_norm |0.2278|± |0.0123| |arc_easy |Yaml |none |acc |0.2551|± |0.0089| | | |none |acc_norm |0.2567|± |0.0090| |lambada_openai|Yaml |none |perplexity| NaN|± | NaN| | | |none |acc |0.0016|± |0.0005| |logiqa |Yaml |none |acc |0.2028|± |0.0158| | | |none |acc_norm |0.2028|± |0.0158| |piqa |Yaml |none |acc |0.4946|± |0.0117| | | |none |acc_norm |0.4924|± |0.0117| |sciq |Yaml |none |acc |0.0140|± |0.0037| | | |none |acc_norm |0.0140|± |0.0037| |winogrande |Yaml |none |acc |0.5036|± |0.0141| |wsc |Yaml |none |acc |0.6346|± |0.0474|
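The card shows evaluation numbers only; a minimal generation sketch, assuming the standard Transformers text-generation pipeline (the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="usvsnsp/pythia-70m-ppo")
out = generator("The quick brown fox", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])
```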
Quacktab/rl_course_vizdoom_health_gathering_supreme
Quacktab
2023-10-04T15:40:44Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T15:35:45Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 19.31 +/- 3.40 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Quacktab/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
merve/blip2-opt-6.7b
merve
2023-10-04T15:39:57Z
9
1
transformers
[ "transformers", "pytorch", "blip-2", "visual-question-answering", "vision", "image-to-text", "image-captioning", "en", "arxiv:2301.12597", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2023-10-04T07:52:34Z
--- language: en license: mit tags: - vision - image-to-text - image-captioning - visual-question-answering pipeline_tag: image-to-text inference: false --- # BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, given the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
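To make the captioning use case concrete, a minimal sketch assuming this checkpoint loads through the standard Transformers BLIP-2 classes (the COCO image URL is just an example, and a GPU with enough memory for a 6.7B-parameter model is assumed):

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("merve/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "merve/blip2-opt-6.7b", torch_dtype=torch.float16, device_map="auto"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

# No text prompt: plain image captioning
inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```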
chicks002/mistral-finetuned-samsum
chicks002
2023-10-04T15:37:36Z
0
0
null
[ "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2023-10-04T14:53:26Z
--- license: apache-2.0 base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ tags: - generated_from_trainer model-index: - name: mistral-finetuned-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-finetuned-samsum This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 ### Training results ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
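The card lists the hyperparameters in prose only; a sketch of how they map onto `transformers.TrainingArguments` (the output directory is illustrative, and unlisted options keep their defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-finetuned-samsum",  # illustrative output path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=250,  # "training_steps: 250" in the card
)
```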
nailashfrni/rice-classification
nailashfrni
2023-10-04T15:26:51Z
159
0
transformers
[ "transformers", "pytorch", "vit", "dataset:rice", "model-index", "endpoints_compatible", "region:us" ]
null
2023-10-03T16:44:23Z
--- datasets: - rice metrics: - accuracy model-index: - name: rice_classification results: - task: name: Image Classification type: image-classification dataset: name: rice type: rice config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9768 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a convolutional image classifier trained on the rice dataset to classify rice grains into 5 classes (Arborio, Basmati, Ipsala, Jasmine and Karacadag). It achieves the following results on the evaluation set: - Loss: 0.0116 - Accuracy: 0.9768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - optimizer: Adam - num_epochs: 5 ### Training results | Epoch | Loss | Accuracy | |:-----:|:------:|:--------:| | 1.0 | 0.0510 | 0.9363 | | 2.0 | 0.0099 | 0.9695 | | 3.0 | 0.5962 | 0.9767 | | 4.0 | 0.4232 | 0.9828 | | 5.0 | 0.0011 | 0.9859 |
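The repository is tagged `vit` and `pytorch`. Assuming the checkpoint loads with the standard Transformers image-classification pipeline, usage would look roughly like this (the image path is hypothetical):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="nailashfrni/rice-classification")
# "rice_grain.jpg" is a hypothetical local image of a single rice grain
print(classifier("rice_grain.jpg"))
# Expected labels: Arborio, Basmati, Ipsala, Jasmine, Karacadag
```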
Quacktab/LunarLander-custom
Quacktab
2023-10-04T15:24:48Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T15:24:38Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -164.33 +/- 109.50 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'test', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 5000, 'learning_rate': 0.00025, 'num_envs': 4, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'Quacktab/LunarLander-custom', 'batch_size': 512, 'minibatch_size': 128} ```
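The card documents hyperparameters but not loading; since this is a custom cleanRL-style implementation, running the agent needs the course's own `Agent` class, but the raw weights can at least be fetched from the Hub (the `model.pt` file name is an assumption; check the repo's file list):

```python
import torch
from huggingface_hub import hf_hub_download

# "model.pt" is the file name the deep-rl-course packaging usually produces; verify it in the repo
checkpoint_path = hf_hub_download(repo_id="Quacktab/LunarLander-custom", filename="model.pt")
state_dict = torch.load(checkpoint_path, map_location="cpu")
```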
youssefoud/Genz-70b-AWQ-split
youssefoud
2023-10-04T15:19:57Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "base_model:budecosystem/genz-70b", "base_model:finetune:budecosystem/genz-70b", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-04T14:05:06Z
--- language: - en license: llama2 library_name: transformers model_name: GenZ 70B base_model: budecosystem/genz-70b inference: false model_creator: Bud model_type: llama pipeline_tag: text-generation prompt_template: '### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # GenZ 70B - AWQ - Model creator: [Bud](https://huggingface.co/budecosystem) - Original model: [GenZ 70B](https://huggingface.co/budecosystem/genz-70b) <!-- description start --> ## Description This repo contains AWQ model files for [Bud's GenZ 70B](https://huggingface.co/budecosystem/genz-70b). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference. It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Genz-70b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Genz-70b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Genz-70b-GGUF) * [Bud's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/budecosystem/genz-70b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant-Newlines ``` ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files and AWQ parameters For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM. Models are released as sharded safetensors files. 
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Genz-70b-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 36.61 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-use-from-vllm start --> ## Serving this model from vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - When using vLLM as a server, pass the `--quantization awq` parameter, for example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Genz-70b-AWQ --quantization awq ``` When using vLLM from Python code, pass the `quantization=awq` parameter, for example: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/Genz-70b-AWQ", quantization="awq") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm end --> <!-- README_AWQ.md-use-from-python start --> ## How to use this AWQ model from Python code ### Install the necessary packages Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later ```shell pip3 install autoawq ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### You can then try the following example code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer model_name_or_path = "TheBloke/Genz-70b-AWQ" # Load model model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True, trust_remote_code=False, safetensors=True) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False) prompt = "Tell me about AI" prompt_template=f'''### User: {prompt} ### Assistant: ''' print("\n\n*** Generate:") tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() # Generate output generation_output = model.generate( tokens, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, max_new_tokens=512 ) print("Output: ", tokenizer.decode(generation_output[0])) # Inference can also be done using transformers' pipeline from transformers import pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm). [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781). 
<!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Bud's GenZ 70B --- <div align="center"><h1 align="center">~ GenZ ~</h1><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/genz-logo.png" width=150></div> <p align="center"><i>Democratizing access to LLMs for the open-source community.<br>Let's advance AI, together. </i></p> --- ## Introduction 🎉 Welcome to **GenZ**, an advanced Large Language Model (LLM) fine-tuned on the foundation of Meta's open-source Llama V2 70B parameter model. At Bud Ecosystem, we believe in the power of open-source collaboration to drive the advancement of technology at an accelerated pace. 
Our vision is to democratize access to fine-tuned LLMs, and to that end, we will be releasing a series of models across different parameter counts (7B, 13B, and 70B) and quantizations (32-bit and 4-bit) for the open-source community to use, enhance, and build upon. <p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_compare.png" width="500"></p> The smaller quantization version of our models makes them more accessible, enabling their use even on personal computers. This opens up a world of possibilities for developers, researchers, and enthusiasts to experiment with these models and contribute to the collective advancement of language model technology. GenZ isn't just a powerful text generator—it's a sophisticated AI assistant, capable of understanding and responding to user prompts with high-quality responses. We've taken the robust capabilities of Llama V2 and fine-tuned them to offer a more user-focused experience. Whether you're seeking informative responses or engaging interactions, GenZ is designed to deliver. And this isn't the end. It's just the beginning of a journey towards creating more advanced, more efficient, and more accessible language models. We invite you to join us on this exciting journey. 🚀 --- <h2>Milestone Releases ️🏁</h2> **[21 August 2023]** [_GenZ-70B_](https://huggingface.co/budecosystem/genz-70b) : We're excited to announce the release of our Genz 70B model. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-70b). **[27 July 2023]** [_GenZ-13B V2 (ggml)_](https://huggingface.co/budecosystem/genz-13b-v2-ggml) : Announcing our GenZ-13B v2 with ggml. This variant of GenZ can run inference using only a CPU, without the need for a GPU. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-ggml). **[27 July 2023]** [_GenZ-13B V2 (4-bit)_](https://huggingface.co/budecosystem/genz-13b-v2-4bit) : Announcing our GenZ-13B v2 with 4-bit quantisation, enabling inference with much less GPU memory than the 32-bit variant. Download the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2-4bit). **[26 July 2023]** [_GenZ-13B V2_](https://huggingface.co/budecosystem/genz-13b-v2) : We're excited to announce the release of our Genz 13B v2 model, a step forward with improved evaluation results compared to v1. Experience the advancements by downloading the model from [HuggingFace](https://huggingface.co/budecosystem/genz-13b-v2). **[20 July 2023]** [_GenZ-13B_](https://huggingface.co/budecosystem/genz-13b) : We marked an important milestone with the release of the Genz 13B model. The journey began here, and you can partake in it by downloading the model from [Hugging Face](https://huggingface.co/budecosystem/genz-13b). --- <h2>Evaluations 🎯</h2> Evaluating our model is a key part of our fine-tuning process. It helps us understand how our model is performing and how it stacks up against other models. Here's a look at some of the key evaluations for GenZ 70B: <h3>Benchmark Comparison</h3> We've compared GenZ models to understand the improvements our fine-tuning has achieved. | Model Name | MT Bench | MMLU | Human Eval | BBH | |:----------:|:--------:|:----:|:----------:|:----:| | Genz 13B | 6.12 | 53.62| 17.68 | 37.76| | Genz 13B v2| 6.79 | 53.68| 21.95 | 38.1 | | Genz 70B | 7.33 | 70.32| 37.8 |54.69 | <h3>MT Bench Score</h3> A key evaluation metric we use is the MT Bench score. 
This score provides a comprehensive assessment of our model's performance across a range of tasks. <p align="center"><img src="https://raw.githubusercontent.com/BudEcosystem/GenZ/main/assets/mt_bench_score.png" width="500"></p> --- <h2>Getting Started on Hugging Face 🤗</h2> Getting up and running with our models on Hugging Face is a breeze. Follow these steps: <h3>1️⃣ : Import necessary modules</h3> Start by importing the necessary modules from the ‘transformers’ library and ‘torch’. ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("budecosystem/genz-70b", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("budecosystem/genz-70b", torch_dtype=torch.bfloat16, rope_scaling={"type": "dynamic", "factor": 2}) prompt = "### User:\nWrite a python flask code for login management\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt") sample = model.generate(**inputs, max_length=128) print(tokenizer.decode(sample[0])) ``` Want to interact with the model in a more intuitive way? We have a Gradio interface set up for that. Head over to our GitHub page, clone the repository, and run the ‘generate.py’ script to try it out. Happy experimenting! 😄 <h2>Why Use GenZ? 💡</h2> You might be wondering, "Why should I choose GenZ over a pretrained model?" The answer lies in the extra mile we've gone to fine-tune our models. While pretrained models are undeniably powerful, GenZ brings something extra to the table. We've fine-tuned it with curated datasets, which means it has additional skills and capabilities beyond what a pretrained model can offer. Whether you need it for a simple task or a complex project, GenZ is up for the challenge. What's more, we are committed to continuously enhancing GenZ. We believe in the power of constant learning and improvement. That's why we'll be regularly fine-tuning our models with various curated datasets to make them even better. Our goal is to reach the state of the art and beyond - and we're committed to staying the course until we get there. But don't just take our word for it. We've provided detailed evaluations and performance details in a later section, so you can see the difference for yourself. Choose GenZ and join us on this journey. Together, we can push the boundaries of what's possible with large language models. --- <h2>Model Card for GenZ 70B 📄</h2> Here's a quick overview of everything you need to know about GenZ 70B. <h3>Model Details:</h3> - Developed by: Bud Ecosystem - Base pretrained model type: Llama V2 70B - Model Architecture: GenZ 70B, fine-tuned on Llama V2 70B, is an auto-regressive language model that employs an optimized transformer architecture. The fine-tuning process for GenZ 70B leveraged Supervised Fine-Tuning (SFT) - License: The model is available for commercial use under a custom commercial license. For more information, please visit: [Meta AI Model and Library Downloads](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) --- <h2>Intended Use 💼</h2> When we created GenZ 70B, we had a clear vision of how it could be used to push the boundaries of what's possible with large language models. We also understand the importance of using such models responsibly. Here's a brief overview of the intended and out-of-scope uses for GenZ 70B. <h3>Direct Use</h3> GenZ 70B is designed to be a powerful tool for research on large language models. 
It's also an excellent foundation for further specialization and fine-tuning for specific use cases, such as: - Text summarization - Text generation - Chatbot creation - And much more! <h3>Out-of-Scope Use 🚩</h3> While GenZ 70B is versatile, there are certain uses that are out of scope: - Production use without adequate assessment of risks and mitigation - Any use cases which may be considered irresponsible or harmful - Use in any manner that violates applicable laws or regulations, including trade compliance laws - Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2 Remember, GenZ 70B, like any large language model, is trained on a large-scale corpora representative of the web, and therefore, may carry the stereotypes and biases commonly encountered online. <h3>Recommendations 🧠</h3> We recommend users of GenZ 70B to consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment. --- <h2>Training Details 📚</h2> When fine-tuning GenZ 70B, we took a meticulous approach to ensure we were building on the solid base of the pretrained Llama V2 70B model in the most effective way. Here's a look at the key details of our training process: <h3>Fine-Tuning Training Data</h3> For the fine-tuning process, we used a carefully curated mix of datasets. These included data from OpenAssistant, an instruction fine-tuning dataset, and Thought Source for the Chain Of Thought (CoT) approach. This diverse mix of data sources helped us enhance the model's capabilities across a range of tasks. <h3>Hyperparameters</h3> Here are the hyperparameters we used for fine-tuning: | Hyperparameter | Value | | -------------- | ----- | | Warmup Ratio | 0.04 | | Learning Rate Scheduler Type | Cosine | | Learning Rate | 2e-5 | | Number of Training Epochs | 3 | | Per Device Training Batch Size | 4 | | Gradient Accumulation Steps | 4 | | Precision | FP16 | | Optimizer | AdamW | --- <h2>Looking Ahead 👀</h2> We're excited about the journey ahead with GenZ. We're committed to continuously improving and enhancing our models, and we're excited to see what the open-source community will build with them. We believe in the power of collaboration, and we can't wait to see what we can achieve together. Remember, we're just getting started. This is just the beginning of a journey that we believe will revolutionize the world of large language models. We invite you to join us on this exciting journey. Together, we can push the boundaries of what's possible with AI. 🚀 --- Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)
Zahra99/wav2vec2-base-finetuned-iemocap-fin
Zahra99
2023-10-04T15:18:37Z
164
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-10-04T14:42:40Z
--- license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: wav2vec2-base-finetuned-iemocap-fin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-iemocap-fin This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1760 - Accuracy: 0.5839 - F1: 0.5773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.2283 | 1.0 | 102 | 1.2181 | 0.4840 | 0.4756 | | 1.124 | 2.0 | 204 | 1.1143 | 0.5015 | 0.4808 | | 1.062 | 3.0 | 306 | 1.1103 | 0.5189 | 0.5067 | | 0.9863 | 4.0 | 408 | 1.0813 | 0.5189 | 0.5152 | | 0.9689 | 5.0 | 510 | 1.0689 | 0.5403 | 0.5318 | | 0.8722 | 6.0 | 612 | 1.0976 | 0.5296 | 0.4992 | | 0.8757 | 7.0 | 714 | 1.0409 | 0.5606 | 0.5518 | | 0.8548 | 8.0 | 816 | 1.0479 | 0.5694 | 0.5636 | | 0.838 | 9.0 | 918 | 1.1700 | 0.5422 | 0.5109 | | 0.7536 | 10.0 | 1020 | 1.0743 | 0.5674 | 0.5681 | | 0.6557 | 11.0 | 1122 | 1.1487 | 0.5616 | 0.5495 | | 0.6193 | 12.0 | 1224 | 1.1239 | 0.5849 | 0.5815 | | 0.5742 | 13.0 | 1326 | 1.1793 | 0.5742 | 0.5617 | | 0.5717 | 14.0 | 1428 | 1.1548 | 0.5868 | 0.5809 | | 0.5929 | 15.0 | 1530 | 1.1760 | 0.5839 | 0.5773 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
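The card gives training details but no usage snippet; a minimal inference sketch, assuming the checkpoint works with the standard Transformers audio-classification pipeline (the audio path is hypothetical, and wav2vec2-base expects 16 kHz audio):

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification", model="Zahra99/wav2vec2-base-finetuned-iemocap-fin"
)
# "speech_sample.wav" is a hypothetical 16 kHz mono recording
print(classifier("speech_sample.wav"))
```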
TheBloke/Llama-2-7B-vietnamese-20k-GPTQ
TheBloke
2023-10-04T15:14:33Z
20
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "llama-2-7B", "llama2-vietnamese", "vietnamese", "base_model:ngoan/Llama-2-7b-vietnamese-20k", "base_model:quantized:ngoan/Llama-2-7b-vietnamese-20k", "license:llama2", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-04T14:57:39Z
--- base_model: ngoantech/Llama-2-7b-vietnamese-20k inference: false license: llama2 model_creator: Pham Van Ngoan model_name: Llama 2 7B Vietnamese 20K model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke tags: - text-generation - llama-2 - llama-2-7B - llama2-vietnamese - vietnamese --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 7B Vietnamese 20K - GPTQ - Model creator: [Pham Van Ngoan](https://huggingface.co/ngoantech) - Original model: [Llama 2 7B Vietnamese 20K](https://huggingface.co/ngoantech/Llama-2-7b-vietnamese-20k) <!-- description start --> ## Description This repo contains GPTQ model files for [Pham Van Ngoan's Llama 2 7B Vietnamese 20K](https://huggingface.co/ngoantech/Llama-2-7b-vietnamese-20k). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GGUF) * [Pham Van Ngoan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ngoantech/Llama-2-7b-vietnamese-20k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. 
Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Alpaca Vietnamese](https://huggingface.co/datasets/nRuaif/Vietnamese_x_Alpaca) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. 
| <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Llama-2-7B-vietnamese-20k-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Llama-2-7B-vietnamese-20k-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Llama-2-7B-vietnamese-20k-GPTQ`: ```shell mkdir Llama-2-7B-vietnamese-20k-GPTQ huggingface-cli download TheBloke/Llama-2-7B-vietnamese-20k-GPTQ --local-dir Llama-2-7B-vietnamese-20k-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Llama-2-7B-vietnamese-20k-GPTQ huggingface-cli download TheBloke/Llama-2-7B-vietnamese-20k-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Llama-2-7B-vietnamese-20k-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Llama-2-7B-vietnamese-20k-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7B-vietnamese-20k-GPTQ --local-dir Llama-2-7B-vietnamese-20k-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-7B-vietnamese-20k-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) 
<!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Llama-2-7B-vietnamese-20k-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Llama-2-7B-vietnamese-20k-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Llama-2-7B-vietnamese-20k-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Llama-2-7B-vietnamese-20k-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers optimum pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.4.2 pip3 install . 
``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Llama-2-7B-vietnamese-20k-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-32g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Pham Van Ngoan's Llama 2 7B Vietnamese 20K # Model Card for Llama 2 Fine-Tuned on Vietnamese Instructions ## Model Details - Model Name: Llama-2-7b-vietnamese-20k - Architecture: Llama 2 7B - Fine-tuning Data Size: 20,000 instruction samples - Purpose: To demonstrate the performance of the Llama 2 model on Vietnamese and gather initial insights. A more comprehensive model and evaluation will be released soon. - Availability: The model checkpoint can be accessed on Hugging Face: ngoantech/Llama-2-7b-vietnamese-20k ## Intended Use This model is intended for researchers, developers, and enthusiasts who are interested in understanding the performance of the Llama 2 model on Vietnamese. It can be used for generating Vietnamese text based on given instructions or for any other task that requires a Vietnamese language model. ## Example Output ![Example output 1](exp_1.png "Example output 1") ## Limitations - Data Size: The model was fine-tuned on a relatively small dataset of 20,000 instruction samples, which might not capture the full complexity and nuances of the Vietnamese language. - Preliminary Model: This is an initial experiment with the Llama 2 architecture on Vietnamese. More refined versions and evaluations will be available soon. - Performance: Specific performance metrics on this fine-tuned model will be provided in the upcoming comprehensive evaluation. ## Ethical Considerations - Bias and Fairness: Like any other machine learning model, there is a possibility that this model might reproduce or amplify biases present in the training data. 
- Use in Critical Systems: As this is a preliminary model, it is recommended not to use it for mission-critical applications without proper validation. - Fine-tuning Data: The model was fine-tuned on a custom dataset of 20,000 instruction samples in Vietnamese. More details about the composition and source of this dataset will be provided in the detailed evaluation report. ## Credits I would like to express my gratitude to the creators of the Llama 2 architecture and the Hugging Face community for their tools and resources. ## Contact [email protected] https://github.com/ngoanpv
zipingl/synthethics
zipingl
2023-10-04T15:12:29Z
0
0
null
[ "license:other", "region:us" ]
null
2023-10-04T15:12:29Z
--- license: other license_name: corporation license_link: https://terms.ziping.org ---
goendalf666/salesGPT_v2
goendalf666
2023-10-04T15:10:35Z
63
2
transformers
[ "transformers", "pytorch", "mixformer-sequential", "text-generation", "generated_from_trainer", "sales", "custom_code", "en", "dataset:goendalf666/sales-conversations-2", "dataset:goendalf666/sales-conversations-instruction-ext", "dataset:goendalf666/sales-conversations-instruction-base", "dataset:goendalf666/sales-textbook_for_convincing_and_selling", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-10-02T22:23:34Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer - sales model-index: - name: salesGPT_v2 results: [] datasets: - goendalf666/sales-conversations-2 - goendalf666/sales-conversations-instruction-ext - goendalf666/sales-conversations-instruction-base - goendalf666/sales-textbook_for_convincing_and_selling language: - en pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # salesGPT_v2 **Model Card for salesGPT_v2** ### Model Description salesGPT_v2, derived from microsoft/phi-1_5, is specialized in simulating sales conversations, wherein it understands customer requirements, manages objections, and suggests suitable products or services. It was fine-tuned on a variety of sales-related datasets and seems proficient in initiating conversations, asking pertinent questions, and sustaining interactive dialogues with users. ### Related Resources GitHub: https://github.com/tom813/salesGPT_foundation salesGPT_v1: https://huggingface.co/goendalf666/salesGPT_v1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63797fcb2cb50dda39d8aec6/re7MmsaYNzTYVH2jEXDDu.png) ### Intended Uses & Limitations **Intended Uses:** - Simulating sales conversations for training or evaluation purposes. - Providing guidelines or suggested dialogues for sales representatives. **Limitations:** - The model might repetitively ask questions in certain scenarios. - May struggle with handling customers who lack specific preferences or knowledge about products. - The objection handling could be more focused on convincing techniques rather than objective criteria. - Challenges in providing appropriate suggestions for customers without specific needs. - Limited effectiveness in handling financial and budgetary conversations or sensitivities. ### Training and Evaluation Data **Training Data:** 1. **Textbook v1 Dataset** - URL: [Dataset](https://huggingface.co/datasets/goendalf666/sales-textbook_for_convincing_and_selling) - Content: Textbook content for sales, derived from structural points and detailed subpoints created through API calls. 2. **Sales Conversation Dataset** - URL: [Dataset](https://huggingface.co/datasets/goendalf666/sales-conversations) - Content: Sales conversations, generated based on the chapters of the textbook. 3. **Sales Conversations Instruction Base Dataset** - URL: [Dataset](https://huggingface.co/datasets/goendalf666/sales-conversations-instruction-base) - Content: Extended sales conversations with structured dialogues. 4. **Sales Conversations Instruction Extension Dataset** - URL: [Dataset](https://huggingface.co/datasets/goendalf666/sales-conversations-instruction-ext) - Content: Updates based on real conversations with the model to improve its proficiency in unconvincing cases. **Evaluation Data:** - More information is needed regarding how and where the model was evaluated. If it was assessed on a separate test set, providing access and details to that dataset would be crucial. ### Training Procedure Fine-tuning of salesGPT_v2 was executed in three phases using the LoRA approach with rank 64: 1. Training on a textbook for 20k steps. 2. Training on sales conversations for 40k steps, resulting in salesGPT_v1. 3. Training on sales conversations instruction for 40k steps, evolving into salesGPT_v2. 
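A minimal PEFT sketch of the LoRA setup just described: the rank of 64 comes from the card, while `lora_alpha`, `target_modules`, and dropout are assumptions for phi-1_5-style blocks:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                  # rank stated in the card
    lora_alpha=16,                         # assumption; not given in the card
    target_modules=["Wqkv", "out_proj"],   # assumption for phi-1_5-style attention blocks
    lora_dropout=0.05,                     # assumption
    task_type="CAUSAL_LM",
)
```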
Hyperparameters used during training:
- Learning rate: 0.0002
- Train batch size: 2
- Eval batch size: 8
- Seed: 42
- Optimizer: Adam
- LR scheduler type: Cosine
- Training steps: 40,000
- [Additional parameters...](https://huggingface.co/goendalf666/salesGPT_v2)

Model size details:
- Trainable params: 18,874,368
- All params: 1,437,145,088
- Trainable%: 1.313323766514519

### Results
While salesGPT_v2 showcases notable capabilities like initiating a structured dialogue and asking relevant questions, there are some key areas for improvement and a few notable observations:
- The model tends to keep asking questions if a user provides vague information.
- In some cases, it is not assertive enough to suggest a product when a customer lacks preferences.
- It tends to lean on objective criteria over convincing strategies during objections.
- It needs better handling of conversations regarding budgetary or financial concerns.

### Example Usage & Conversations
#### Example 1:
**Customer**: I am interested in buying a new laptop.

**salesGPT_v2**: I understand. Could you please tell me more about your specific requirements and budget?

#### Example 2:
**Customer**: Oh I don't know how I should explain this to my wife. I said we can't go to vacation because we don't have any money.

**salesGPT_v2**: I see. Can you please provide more details about your financial situation and the specific expenses you are concerned about?

These examples underline the model's capability to engage in sales dialogues and also hint at its limitations, such as possibly prying too much into a customer's personal financial situation.

### Future Directions & Improvements
- Enhancement in handling objections by focusing more on persuasive techniques and emotional intelligence.
- Improving suggestion-making capability, especially when customers are indecisive.
- Better navigation of conversations involving budgetary and financial aspects without seeming insensitive or intrusive.
- Striking a balance between being informative and being too technical in its product descriptions.
- Possible implementation of more ethical and privacy-guided conversation guidelines, especially in discussing customers' financial capacities.

### Ethical Considerations
The model's tendency to repeatedly ask for specific information, especially related to personal financial details, raises ethical concerns regarding privacy and data sensitivity. Care must be taken to ensure the model respects user privacy and does not persistently probe for personal or sensitive information.

### Conclusion
salesGPT_v2 offers a foundation for simulating sales conversations, with potential for future refinement in handling objections, making product suggestions, and managing conversations delicately around financial discussions. Future versions might seek to refine its balance between being convincingly persuasive and remaining ethically and emotionally intelligent within dialogues.
### Inference

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Initialize the model and tokenizer
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("goendalf666/salesGPT_v2", trust_remote_code=True, torch_dtype=torch.float32, device_map={"": 0})
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# `conversation_text` holds the dialogue so far, e.g.:
conversation_text = "Customer: I am interested in buying a new laptop."
inputs = tokenizer(conversation_text, return_tensors="pt", return_attention_mask=False)
inputs = inputs.to(device)

# Generate response
outputs = model.generate(**inputs, max_length=512)
response_text = tokenizer.batch_decode(outputs)[0]
```

Or use the inference script: https://github.com/tom813/salesGPT_foundation/blob/main/inference.py

### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0.dev20230829+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
duwi/dqn-SpaceInvadersNoFrameskip-v4
duwi
2023-10-04T15:07:43Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T13:46:51Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 671.00 +/- 164.34 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga duwi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga duwi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga duwi ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
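## Loading the model in Python (sketch)

Beyond the Zoo CLI, the checkpoint can also be loaded directly with SB3. The following is a minimal sketch; the checkpoint filename follows the usual RL Zoo naming convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from RL Zoo naming conventions; check the repo file listing if it differs
checkpoint = load_from_hub(
    repo_id="duwi/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```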
TanmaySah/small
TanmaySah
2023-10-04T14:55:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-09-28T16:49:05Z
---
library_name: peft
---
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions
- PEFT 0.5.0
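### Reproducing the quantization config (sketch)

The flags above map onto the `transformers` quantization API roughly as follows. This is a sketch mirroring the listed 8-bit values; the `bnb_4bit_*` entries above are defaults that do not apply when loading in 8-bit, and the base model for this adapter is not stated in the card:

```python
from transformers import BitsAndBytesConfig

# Mirrors the 8-bit bitsandbytes config listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```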
bolon667/Dr_Kawashima_Brain_Age
bolon667
2023-10-04T14:53:02Z
1
0
transformers
[ "transformers", "rvc", "voice_clonning", "audio-to-audio", "en", "license:mit", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-10-04T14:41:46Z
---
license: mit
language:
- en
tags:
- rvc
- voice_clonning
pipeline_tag: audio-to-audio
---
Dr. Kawashima's voice from the Brain Age series. Trained for 100 epochs at a 40 kHz sampling rate.
sophiaaaa/distilroberta-base-finetuned-wikitext2
sophiaaaa
2023-10-04T14:37:28Z
210
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-04T14:07:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.14.5 - Tokenizers 0.13.3
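## Example usage

Since the card provides no usage example, a minimal inference sketch with the `fill-mask` pipeline might look like this (the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sophiaaaa/distilroberta-base-finetuned-wikitext2")
# RoBERTa-style tokenizers use "<mask>" as the mask token
print(fill_mask("The capital of France is <mask>."))
```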
Jas23/ppo-Huggy
Jas23
2023-10-04T14:31:32Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-10-04T14:31:13Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Jas23/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
abaheti95/dpo_qlora_hh
abaheti95
2023-10-04T14:29:05Z
0
1
null
[ "arxiv:2305.14718", "arxiv:2305.18290", "region:us" ]
null
2023-10-04T14:08:39Z
## HH-RLHF QLoRA adapters trained with Direct Preference Optimization, from the experiments with A-LoL

| [Paper](https://arxiv.org/abs/2305.14718) | [Code](https://github.com/abaheti95/LoL-RL) |

### Model description
We continued QLoRA finetuning on the [HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) starting from [timdettmers/qlora-hh-rlhf-7b](https://huggingface.co/timdettmers/qlora-hh-rlhf-7b) for 1 epoch. [DPO](https://arxiv.org/abs/2305.18290) uses both the chosen and the rejected responses from the preference data to finetune the language model. DPO serves as the main preference-based offline RL baseline for comparison with the Advantage Leftover Lunch methods.

⚠️ These models are purely intended for research purposes and could produce problematic outputs.

### Usage
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.truncation_side = "left"
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # llama tokenizers may ship without a pad token; fall back to EOS (assumption)
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={i: '40000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(base_model, "abaheti95/dpo_qlora_hh")
prefix = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n### Human: How would I go about making homemade chewing gum? ### Assistant: Well, it\u2019s pretty easy, but you can either make it a bit better or more interesting if you use xanthan gum, which is a commonly used thickener. Xanthan gum is added to food products as a thickener, gelling agent, emulsifier, and stabilizer, which means that it allows sugar syrups to form gels. If you can get hold of some, you could use it to make really interesting gum! ### Human: Okay what do I do when I have the xanthan gum? What else do I need to add? ### Assistant: Well, the important ingredient is the gums. You can find many different types at your local pharmacy, which should be able to help you figure out what you need. You will need: 1) a mixer, 2) sugar, 3) xanthan gum, 4) flavoring, 5) gum base, and 6) a dehydrator, which is a special oven that dries food. ### Human: Why would I need a dehydrator to make gum? ### Assistant:"
current_batch_prefixes_inputs = tokenizer([prefix], max_length=768 - 128, truncation=True, add_special_tokens=True, padding=True, return_tensors="pt").to(model.device)
# 835 = "###"
predicted_sents = model.generate(**current_batch_prefixes_inputs, max_new_tokens=128, pad_token_id=tokenizer.pad_token_id, num_beams=1, do_sample=False, num_return_sequences=1, eos_token_id=[835, tokenizer.eos_token_id])
response_tokens = predicted_sents[:, current_batch_prefixes_inputs['input_ids'].shape[-1]:]
responses = tokenizer.batch_decode(response_tokens, skip_special_tokens=True)
# Normalize responses
responses_normalized = [resp.split("\n Human:")[0].split("\nHuman:")[0].split("\n### Human")[0].strip() for resp in responses]
responses_normalized = [resp.replace("###", "").strip() if resp.endswith("###") else resp.strip() for resp in responses_normalized]
```

We also show the evaluation results of the model on the test set in the files: `harmless_base_eval_results.jsonl`, `helpful_base_eval_results.jsonl`, `helpful_online_eval_results.jsonl`, and `helpful_rejection_eval_results.jsonl`.
### Framework version and configuration - PEFT 0.5.0 The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16
suno/bark
suno
2023-10-04T14:17:55Z
48,851
1,237
transformers
[ "transformers", "pytorch", "bark", "text-to-audio", "audio", "text-to-speech", "en", "de", "es", "fr", "hi", "it", "ja", "ko", "pl", "pt", "ru", "tr", "zh", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-04-25T14:44:46Z
--- language: - en - de - es - fr - hi - it - ja - ko - pl - pt - ru - tr - zh thumbnail: >- https://user-images.githubusercontent.com/5068315/230698495-cbb1ced9-c911-4c9a-941d-a1a4a1286ac6.png library: bark license: mit tags: - bark - audio - text-to-speech pipeline_tag: text-to-speech inference: true --- # Bark Bark is a transformer-based text-to-audio model created by [Suno](https://www.suno.ai). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints ready for inference. The original github repo and model card can be found [here](https://github.com/suno-ai/bark). This model is meant for research purposes only. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk. Two checkpoints are released: - [small](https://huggingface.co/suno/bark-small) - [**large** (this checkpoint)](https://huggingface.co/suno/bark) ## Example Try out Bark yourself! * Bark Colab: <a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Colab: <a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> * Hugging Face Demo: <a target="_blank" href="https://huggingface.co/spaces/suno/bark"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/> </a> ## 🤗 Transformers Usage You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards. 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy: ``` pip install --upgrade pip pip install --upgrade transformers scipy ``` 2. Run inference via the `Text-to-Speech` (TTS) pipeline. You can infer the bark model via the TTS pipeline in just a few lines of code! ```python from transformers import pipeline import scipy synthesiser = pipeline("text-to-speech", "suno/bark") speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"do_sample": True}) scipy.io.wavfile.write("bark_out.wav", rate=speech["sampling_rate"], data=speech["audio"]) ``` 3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 24 kHz speech waveform for more fine-grained control. ```python from transformers import AutoProcessor, AutoModel processor = AutoProcessor.from_pretrained("suno/bark") model = AutoModel.from_pretrained("suno/bark") inputs = processor( text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."], return_tensors="pt", ) speech_values = model.generate(**inputs, do_sample=True) ``` 4. Listen to the speech samples either in an ipynb notebook: ```python from IPython.display import Audio sampling_rate = model.generation_config.sample_rate Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate) ``` Or save them as a `.wav` file using a third-party library, e.g. 
`scipy`:

```python
import scipy

sampling_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```

For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark).

## Suno Usage

You can also run Bark locally through the original [Bark library](https://github.com/suno-ai/bark):

1. First install the [`bark` library](https://github.com/suno-ai/bark)

2. Run the following Python code:

```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
     Hello, my name is Suno. And, uh — and I like pizza. [laughs] 
     But I also have other interests such as playing tic tac toe.
"""
speech_array = generate_audio(text_prompt)

# play text in notebook
Audio(speech_array, rate=SAMPLE_RATE)
```

[pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm)

To save `speech_array` as a WAV file:

```python
from scipy.io.wavfile import write as write_wav

write_wav("/path/to/audio.wav", SAMPLE_RATE, speech_array)
```

## Model Details

The following is additional information about the models released here.

Bark is a series of three transformer models that turn text into audio.

### Text to semantic tokens
- Input: text, tokenized with [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)
- Output: semantic tokens that encode the audio to be generated

### Semantic to coarse tokens
- Input: semantic tokens
- Output: tokens from the first two codebooks of the [EnCodec Codec](https://github.com/facebookresearch/encodec) from facebook

### Coarse to fine tokens
- Input: the first two codebooks from EnCodec
- Output: 8 codebooks from EnCodec

### Architecture
| Model | Parameters | Attention | Output Vocab size |
|:-------------------------:|:----------:|------------|:-----------------:|
| Text to semantic tokens | 80/300 M | Causal | 10,000 |
| Semantic to coarse tokens | 80/300 M | Causal | 2x 1,024 |
| Coarse to fine tokens | 80/300 M | Non-causal | 6x 1,024 |

### Release date
April 2023

## Broader Implications
We anticipate that this model's text-to-audio capabilities can be used to improve accessibility tools in a variety of languages.

While we hope that this release will enable users to express their creativity and build applications that are a force for good, we acknowledge that any text-to-audio model has the potential for dual use. While it is not straightforward to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark, we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).
Ibrahim-Alam/finetuning-roberta-base-on-sst2_1epoch
Ibrahim-Alam
2023-10-04T14:05:39Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:sst2", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-10T19:48:58Z
--- license: mit tags: - generated_from_trainer datasets: - sst2 metrics: - accuracy - f1 model-index: - name: finetuning-roberta-base-on-sst2 results: - task: name: Text Classification type: text-classification dataset: name: sst2 type: sst2 config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9415137614678899 - name: F1 type: f1 value: 0.9425028184892897 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-roberta-base-on-sst2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 0.2207 - Accuracy: 0.9415 - F1: 0.9425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
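## Example usage

As the card gives no usage example, a minimal inference sketch with the `text-classification` pipeline might look like this (the example sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ibrahim-Alam/finetuning-roberta-base-on-sst2_1epoch",
)
print(classifier("A thoroughly engaging and well-acted film."))
```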
abaheti95/a_lol_seq_good_prioirty_qlora_hh
abaheti95
2023-10-04T14:00:49Z
0
0
null
[ "arxiv:2305.14718", "region:us" ]
null
2023-10-04T13:23:33Z
## HH-RLHF QLoRA adapters trained with Advantage Leftover Lunch RL Sequence (A-LoL seq.)

| [Paper](https://arxiv.org/abs/2305.14718) | [Code](https://github.com/abaheti95/LoL-RL) |

### Model description
We continued QLoRA finetuning on the [HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf) starting from [timdettmers/qlora-hh-rlhf-7b](https://huggingface.co/timdettmers/qlora-hh-rlhf-7b) for 1 epoch, keeping only the "chosen" responses and removing the "rejected" responses from the training split. Even within the chosen responses, our method, Advantage Leftover Lunch RL (A-LoL), inherently identifies 33% of the responses as having negative advantage and thus discards them as unfit for training. Despite the low number of training examples, the final adapter trained with A-LoL seq. generates the most diverse, safe, and helpful responses compared to the baselines.

⚠️ These models are purely intended for research purposes and could produce problematic outputs.

### Usage
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tokenizer.truncation_side = "left"
tokenizer.padding_side = "left"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # llama tokenizers may ship without a pad token; fall back to EOS (assumption)
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={i: '40000MB' for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_quant_type='nf4'
    ),
)
model = PeftModel.from_pretrained(base_model, "abaheti95/a_lol_seq_good_prioirty_qlora_hh")
prefix = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n### Human: How would I go about making homemade chewing gum? ### Assistant: Well, it\u2019s pretty easy, but you can either make it a bit better or more interesting if you use xanthan gum, which is a commonly used thickener. Xanthan gum is added to food products as a thickener, gelling agent, emulsifier, and stabilizer, which means that it allows sugar syrups to form gels. If you can get hold of some, you could use it to make really interesting gum! ### Human: Okay what do I do when I have the xanthan gum? What else do I need to add? ### Assistant: Well, the important ingredient is the gums. You can find many different types at your local pharmacy, which should be able to help you figure out what you need. You will need: 1) a mixer, 2) sugar, 3) xanthan gum, 4) flavoring, 5) gum base, and 6) a dehydrator, which is a special oven that dries food. ### Human: Why would I need a dehydrator to make gum? ### Assistant:"
current_batch_prefixes_inputs = tokenizer([prefix], max_length=768 - 128, truncation=True, add_special_tokens=True, padding=True, return_tensors="pt").to(model.device)
# 835 = "###"
predicted_sents = model.generate(**current_batch_prefixes_inputs, max_new_tokens=128, pad_token_id=tokenizer.pad_token_id, num_beams=1, do_sample=False, num_return_sequences=1, eos_token_id=[835, tokenizer.eos_token_id])
response_tokens = predicted_sents[:, current_batch_prefixes_inputs['input_ids'].shape[-1]:]
responses = tokenizer.batch_decode(response_tokens, skip_special_tokens=True)
# Normalize responses
responses_normalized = [resp.split("\n Human:")[0].split("\nHuman:")[0].split("\n### Human")[0].strip() for resp in responses]
responses_normalized = [resp.replace("###", "").strip() if resp.endswith("###") else resp.strip() for resp in responses_normalized]
```

We also show the evaluation results of the model on the test set in the files: `harmless_base_eval_results.jsonl`, `helpful_base_eval_results.jsonl`, `helpful_online_eval_results.jsonl`, and `helpful_rejection_eval_results.jsonl`.

### Framework version and configuration
- PEFT 0.5.0

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
divy31245/svnit-500-100
divy31245
2023-10-04T13:53:25Z
0
0
peft
[ "peft", "arxiv:1910.09700", "region:us" ]
null
2023-10-04T13:52:34Z
--- library_name: peft base_model: decapoda-research/llama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
c-g/a2c-PandaReachDense-v3
c-g
2023-10-04T13:40:05Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T13:34:41Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.25 +/- 0.11
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption based on the usual Hub naming convention for SB3 repos):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed; check the repo's file listing if loading fails
checkpoint = load_from_hub(
    repo_id="c-g/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
Jaynm31245/svnit-300
Jaynm31245
2023-10-04T13:38:21Z
0
0
peft
[ "peft", "arxiv:1910.09700", "region:us" ]
null
2023-10-04T13:37:28Z
--- library_name: peft base_model: decapoda-research/llama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
Yacong/ru-lora-trained-xl
Yacong
2023-10-04T13:38:11Z
3
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-10-04T13:04:15Z
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of ru doll
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - Yacong/ru-lora-trained-xl

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on "a photo of ru doll" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: True.

Special VAE used for training: None.
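A minimal `diffusers` inference sketch (assumes a CUDA GPU; the prompt comes from the `instance_prompt` above):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Yacong/ru-lora-trained-xl")

image = pipe("a photo of ru doll", num_inference_steps=30).images[0]
image.save("ru_doll.png")
```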
TheBloke/Dans-TotSirocco-7B-GPTQ
TheBloke
2023-10-04T13:32:37Z
18
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "base_model:Dans-Archive/Dans-TotSirocco-7b", "base_model:quantized:Dans-Archive/Dans-TotSirocco-7b", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-04T12:42:16Z
--- base_model: PocketDoc/Dans-TotSirocco-7b inference: false language: - en model_creator: PocketDoc Labs model_name: Dans TotSirocco 7B model_type: mistral prompt_template: '<|system|>{system_message}<|user|>{prompt}<|model|> ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Dans TotSirocco 7B - GPTQ - Model creator: [PocketDoc Labs](https://huggingface.co/PocketDoc) - Original model: [Dans TotSirocco 7B](https://huggingface.co/PocketDoc/Dans-TotSirocco-7b) <!-- description start --> ## Description This repo contains GPTQ model files for [PocketDoc Labs's Dans TotSirocco 7B](https://huggingface.co/PocketDoc/Dans-TotSirocco-7b). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GGUF) * [PocketDoc Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PocketDoc/Dans-TotSirocco-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Metharme ``` <|system|>{system_message}<|user|>{prompt}<|model|> ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. 
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 32768 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Dans-TotSirocco-7B-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Dans-TotSirocco-7B-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Dans-TotSirocco-7B-GPTQ`: ```shell mkdir Dans-TotSirocco-7B-GPTQ huggingface-cli download TheBloke/Dans-TotSirocco-7B-GPTQ --local-dir Dans-TotSirocco-7B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Dans-TotSirocco-7B-GPTQ huggingface-cli download TheBloke/Dans-TotSirocco-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir Dans-TotSirocco-7B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Dans-TotSirocco-7B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dans-TotSirocco-7B-GPTQ --local-dir Dans-TotSirocco-7B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Dans-TotSirocco-7B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. 
Under **Download custom model or LoRA**, enter `TheBloke/Dans-TotSirocco-7B-GPTQ`.
  - To download from a specific branch, enter for example `TheBloke/Dans-TotSirocco-7B-GPTQ:gptq-4bit-32g-actorder_True`
  - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Dans-TotSirocco-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Dans-TotSirocco-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # set your own system prompt here
prompt_template = f'''<|system|>{system_message}<|user|>{prompt}<|model|>
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Dans-TotSirocco-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # set your own system prompt here
prompt_template = f'''<|system|>{system_message}<|user|>{prompt}<|model|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: PocketDoc Labs's Dans TotSirocco 7B ### Description: This model is to test some of my data projects. It is a multipurpose chat / chat instruct hybrid model and can be seen as a prototype of Dan's PersonalityEngine Mk. 2. It has been trained on a wide array of one shot instructions, multi round instructions, role playing scenarios, and text adventure games. It is based on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Prompt format: Pygmalion / Metharme The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired. ``` <|system|>system message here<|user|>user message here<|model|> ``` ``` <|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|> ``` ``` <|system|>system message here<|model|> ``` ``` <|system|>system message here<|model|>model message<|user|>user message here<|model|> ``` #### Some example prompts and their resulting outputs: ##### Prompt: ``` <|system|>You are an assistant, help the user and use markdown when applicable.<|user|>What are the top games of the 90s?<|model|> ``` ##### Output: ``` The 1990s was a golden era for video gaming, with countless iconic titles released across various platforms. Here's a list of some of the most popular and influential games from that decade: - **Super Mario World** (SNES) - Released in 1990, this classic platformer introduced fans to Yoshi and featured memorable levels like Donut Plains and Star World. 
- **Mortal Kombat** (Arcade/Sega Genesis) - This gory fighting game shocked audiences upon its release in 1992 with its violent finishers and secret characters like Sub-Zero and Scorpion. - **Doom** (PC) - Id Software's groundbreaking first-person shooter revolutionized the genre by introducing fast-paced action and multiplayer deathmatches. - **Street Fighter II** (Arcade/SNES) - Capcom's sequel to their original Street Fighter arcade game added new fighters such as Ryu's rival Ken Masters plus complex special moves like Dragon Punch and Hadouken. - **Crash Bandicoot** (PlayStation) - Naughty Dog's mascot platforming series debuted in 1996 starring Crash who had to save his girlfriend Tawna from Dr Cortex's evil experiments on Aku Aku masks. - **Final Fantasy VII** (PlayStation) - Square Enix's seventh installment took players through Midgar City where they joined Cloud Strife's eco-terrorist group AVALANCHE against Shinra Electric Power Company led by Sephiroth. - **Resident Evil** (PlayStation) - The survival horror masterpiece created by Capcom saw Chris Redfield investigating Spencer Mansion alongside Jill Valentine while trying not get eaten alive by zombies! ``` ##### Prompt: ``` <|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|> ``` ##### Output: ``` The roar of the engines shook the very foundations of the planet as the massive warships descended upon Alpha Centauri. The air was thick with the stench of burning fuel and smoke, as if the entire world had been set ablaze. In the chaos, soldiers were scrambling for cover, desperately trying to protect themselves from the relentless barrage of firepower unleashed by their enemy. Amidst the carnage, one soldier stood tall, his heart pounding against his chest like a drumbeat of death. He knew he had to do something, anything, to stop this nightmare from becoming reality. With trembling hands, he reached for his weapon, determined to make a stand against the overwhelming force that threatened to consume them all. As the battle raged around him, the soldier felt a sudden surge of adrenaline coursing through his veins. It was almost as if some hidden power within him had awoken, granting him strength beyond measure. His eyes blazed with determination as he raised his gun and took aim at the first target he saw: a bear-like creature, its fur matted with blood and gore, advancing towards him with murderous intent. With a deafening crack, the soldier fired off several rounds, striking the beast square in the chest. The impact sent it reeling backwards, but it did not fall. Instead, it let out a ferocious growl before charging forward once more. The soldier gritted his teeth and pulled the trigger again, emptying his clip into the creature's already battered frame. But even as the bullets tore through its flesh, the beast seemed unfazed, continuing its advance towards the terrified soldier. Desperate for any advantage, he threw himself to the side just moments before the monster slammed into the spot where he had been standing mere seconds ago. As it turned back towards him, ready to deliver another brutal blow, the soldier could see that its eyes glowed with an otherworldly light. 
``` ##### Prompt: ``` <|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|> ``` ##### Output: ``` I hesitated before making my decision, knowing full well it was not wise. However, in the end, I chose to proceed regardless. ``` # Some quick and dirty training details: - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Sequence length: 4096 - Training time: 4 hours - Hardware: 2x RTX 4090 - Training type: QLoRA - PEFT R/A: 32/32 # Credits: ### Skein Text Adventure Data: Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
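# Example: building prompts programmatically

As a convenience, the multi-round Pygmalion / Metharme format described above can be assembled with a small helper. This is a minimal, hypothetical sketch; the helper name and structure are not part of the model release:

```python
def metharme_prompt(system, turns):
    """Build a multi-round Pygmalion/Metharme prompt string.

    `turns` is a list of (user_message, model_reply) pairs; pass None as the
    final reply to leave the prompt open for the model to continue.
    """
    prompt = f"<|system|>{system}"
    for user_msg, model_msg in turns:
        prompt += f"<|user|>{user_msg}<|model|>"
        if model_msg is not None:
            prompt += model_msg
    return prompt

# Two rounds, matching the valid formats shown above:
print(metharme_prompt("You are an assistant.",
                      [("Hi!", "Hello! How can I help?"),
                       ("What are the top games of the 90s?", None)]))
```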
CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r4-q_k_v_o
CHIH-HUNG
2023-10-04T13:31:44Z
1,488
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:huangyt/FINETUNE4", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-20T22:23:57Z
---
license: llama2
datasets:
- huangyt/FINETUNE4
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from llama-2-13b on the huangyt/FINETUNE4 dataset, about 38k (3.8w) training examples in total.

# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE4 (about 38k training examples)
- **peft_type:** LoRA
- **lora_rank:** 4
- **lora_target:** q_proj, k_proj, v_proj, o_proj (see the `LoraConfig` sketch at the end of this card)
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate:** 4e-4
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit

# Fine-Tuning Detail
- **train_loss:** 0.579
- **train_runtime:** 4:6:11 (using deepspeed)

# Evaluation
- Compared against Llama-2-13b on four benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
- The scores below were measured **locally**, using load_in_8bit

| Model                                   |Average| ARC   |HellaSwag| MMLU  | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE4_3.8w-r4-q_k_v_o               | 56.67 | 52.13 | 79.38   | 54.54 | 40.64      |
| FINETUNE4_3.8w-r8-q_k_v_o               | 56.84 | 52.30 | 79.58   | 54.50 | 40.98      |
| FINETUNE4_3.8w-r16-q_k_v_o              | 57.28 | 53.92 | 79.92   | 55.61 | 39.65      |
| FINETUNE4_3.8w-r4-gate_up_down          | 55.93 | 51.71 | 79.13   | 53.24 | 39.63      |
| FINETUNE4_3.8w-r8-gate_up_down          | 55.93 | 51.37 | 79.29   | 53.62 | 39.45      |
| FINETUNE4_3.8w-r16-gate_up_down         | 56.35 | 52.56 | 79.28   | 55.27 | 38.31      |
| FINETUNE4_3.8w-r4-q_k_v_o_gate_up_down  | 56.42 | 53.92 | 79.09   | 53.93 | 38.74      |
| FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down  | 56.11 | 51.02 | 79.24   | 53.11 | 41.08      |
| FINETUNE4_3.8w-r16-q_k_v_o_gate_up_down | 56.83 | 53.67 | 79.49   | 54.79 | 39.36      |

------------------------------------------------------------------------------------------

- The scores below are from the **HuggingFaceH4/open_llm_leaderboard**

| Model                                   |Average| ARC   |HellaSwag| MMLU  | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE4_3.8w-r4-q_k_v_o               | 57.98 | 54.78 | 81.4    | 54.73 | 41.02      |
| FINETUNE4_3.8w-r8-q_k_v_o               | 58.96 | 57.68 | 81.91   | 54.95 | 41.31      |
| FINETUNE4_3.8w-r16-q_k_v_o              | 58.46 | 56.23 | 81.98   | 55.87 | 39.76      |
| FINETUNE4_3.8w-r4-gate_up_down          | 57.94 | 55.8  | 81.74   | 55.09 | 39.12      |
| FINETUNE4_3.8w-r8-gate_up_down          | 57.85 | 54.35 | 82.13   | 55.33 | 39.6       |
| FINETUNE4_3.8w-r16-gate_up_down         | 57.93 | 55.03 | 81.97   | 56.64 | 38.07      |
| FINETUNE4_3.8w-r4-q_k_v_o_gate_up_down  | 58.04 | 56.31 | 81.43   | 55.3  | 39.11      |
| FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down  | 58.16 | 55.97 | 81.53   | 54.42 | 40.72      |
| FINETUNE4_3.8w-r16-q_k_v_o_gate_up_down | 58.61 | 57.25 | 81.49   | 55.9  | 39.79      |

# How to convert the dataset to JSON
- Enter the dataset name in **load_dataset**, and the number of leading rows to fetch in **take**
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)

```py
import json
from datasets import load_dataset

# Load the dataset; with streaming, take can fetch the first n rows
dataset = load_dataset("huangyt/FINETUNE4", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Name of the output JSON file
json_filename = "FINETUNE4.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
```
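For reference, the LoRA settings above map onto a PEFT `LoraConfig` roughly as in the sketch below; `lora_alpha` and `lora_dropout` are not stated in this card and are placeholder assumptions:

```py
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                          # lora_rank
    lora_alpha=16,                # assumed; not stated in the card
    lora_dropout=0.05,            # assumed; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```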
Pharabino/lapo_lapo
Pharabino
2023-10-04T13:30:34Z
0
0
null
[ "license:other", "region:us" ]
null
2023-10-04T13:30:34Z
--- license: other license_name: pharabino license_link: LICENSE ---
Msughterx/wav2vec2-base-igbo
Msughterx
2023-10-04T13:29:16Z
75
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-30T07:35:48Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer - accuracy - f1 - recall model-index: - name: wav2vec2-base-igbo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-igbo This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9388 - Wer Ortho: 145.8015 - Wer: 145.4198 - Accuracy: 0.0 - F1: 0.0 - Recall: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | Accuracy | F1 | Recall | |:-------------:|:-----:|:----:|:---------------:|:---------:|:--------:|:--------:|:---:|:------:| | 0.0004 | 50.0 | 500 | 2.9388 | 145.8015 | 145.4198 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
CHIH-HUNG
2023-10-04T13:28:39Z
1,493
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:huangyt/FINETUNE3", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-19T17:42:51Z
---
license: llama2
datasets:
- huangyt/FINETUNE3
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from llama-2-13b on the huangyt/FINETUNE3 dataset, about 33k (3.3w) training examples in total.

# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE3 (about 33k training examples)
- **peft_type:** LoRA
- **lora_rank:** 4
- **lora_target:** q_proj, k_proj, v_proj, o_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate:** 4e-4
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit

# Fine-Tuning Detail
- **train_loss:** 0.579
- **train_runtime:** 4:6:11 (using deepspeed)

# Evaluation
- Compared against Llama-2-13b on four benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
- The scores below were measured **locally**, using load_in_8bit

| Model                                   |Average| ARC   |HellaSwag| MMLU  | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE3_3.3w-r4-q_k_v_o               | 56.29 | 54.27 | 79.42   | 51.90 | 39.58      |
| FINETUNE3_3.3w-r8-q_k_v_o               | 56.53 | 52.99 | 79.45   | 53.53 | 40.14      |
| FINETUNE3_3.3w-r16-q_k_v_o              | 56.25 | 53.24 | 79.53   | 54.03 | 38.20      |
| FINETUNE3_3.3w-r4-gate_up_down          | 55.79 | 51.02 | 79.37   | 53.36 | 39.40      |
| FINETUNE3_3.3w-r8-gate_up_down          | 56.60 | 53.33 | 79.43   | 53.60 | 40.03      |
| FINETUNE3_3.3w-r16-gate_up_down         | 56.34 | 51.88 | 79.42   | 54.64 | 39.44      |
| FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down  | 56.67 | 53.07 | 79.34   | 54.07 | 40.19      |
| FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down  | 56.93 | 54.61 | 79.16   | 53.51 | 40.46      |
| FINETUNE3_3.3w-r16-q_k_v_o_gate_up_down | 57.78 | 53.92 | 79.41   | 54.68 | 43.09      |

-------------------------------------------------------------------------------------------

- The scores below are from the **HuggingFaceH4/open_llm_leaderboard**

| Model                                   |Average| ARC   |HellaSwag| MMLU  | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE3_3.3w-r4-q_k_v_o               | 58.34 | 59.04 | 81.15   | 53    | 40.16      |
| FINETUNE3_3.3w-r8-q_k_v_o               | 58.28 | 56.06 | 81.89   | 55.04 | 40.12      |
| FINETUNE3_3.3w-r16-q_k_v_o              | 58.55 | 59.3  | 81.2    | 55.58 | 38.13      |
| FINETUNE3_3.3w-r4-gate_up_down          | 57.79 | 56.4  | 81.93   | 53.63 | 39.23      |
| FINETUNE3_3.3w-r8-gate_up_down          | 58.17 | 57.25 | 81.79   | 53.96 | 39.66      |
| FINETUNE3_3.3w-r16-gate_up_down         | 58.91 | 58.7  | 81.89   | 56.08 | 38.95      |
| FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down  | 58.42 | 57.76 | 80.78   | 54.32 | 40.8       |
| FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down  | 58.26 | 57.94 | 81.19   | 53.43 | 40.48      |
| FINETUNE3_3.3w-r16-q_k_v_o_gate_up_down | 59.62 | 59.22 | 81.52   | 54.94 | 42.83      |

# How to convert the dataset to JSON
- Enter the dataset name in **load_dataset**, and the number of leading rows to fetch in **take** (a short `take` sketch follows below)
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)

```py
import json
from datasets import load_dataset

# Load the dataset; with streaming, take can fetch the first n rows
dataset = load_dataset("huangyt/FINETUNE3", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Name of the output JSON file
json_filename = "FINETUNE3.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
```
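As mentioned in the list above, with `streaming=True` the first n rows can be fetched with `take` instead of iterating the whole split; a minimal sketch:

```py
from datasets import load_dataset

# Stream the dataset and keep only the first 100 rows via take()
dataset = load_dataset("huangyt/FINETUNE3", split="train", streaming=True)
for example in dataset.take(100):
    print(example["instruction"])
```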
deepanshu30699/wizard-python-financial_6_gptq
deepanshu30699
2023-10-04T13:19:16Z
6
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-10-04T07:45:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: gptq - bits: 4 - tokenizer: None - dataset: None - group_size: 128 - damp_percent: 0.1 - desc_act: False - sym: True - true_sequential: True - use_cuda_fp16: False - model_seqlen: None - block_name_to_quantize: None - module_name_preceding_first_block: None - batch_size: 1 - pad_token_id: None - disable_exllama: True - max_input_length: None ### Framework versions - PEFT 0.5.0
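These fields correspond to the `GPTQConfig` in `transformers`; a minimal sketch of the equivalent config follows (fields that were `None` above, such as the tokenizer and dataset, are simply omitted):

```python
from transformers import GPTQConfig

# Reconstruction of the quantization config listed above
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.1,
    desc_act=False,
    sym=True,
    true_sequential=True,
    use_cuda_fp16=False,
    disable_exllama=True,
)
```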
dimcall/xlm-roberta-base-finetuned-panx-de-fr
dimcall
2023-10-04T13:18:35Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T13:10:28Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1655 - F1: 0.8583 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2984 | 1.0 | 715 | 0.1808 | 0.8222 | | 0.1478 | 2.0 | 1430 | 0.1621 | 0.8463 | | 0.0985 | 3.0 | 2145 | 0.1655 | 0.8583 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.11.0+cu115 - Datasets 1.16.1 - Tokenizers 0.14.0
SJahanzad/3dbloom
SJahanzad
2023-10-04T13:17:23Z
1
0
peft
[ "peft", "region:us" ]
null
2023-10-04T13:17:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
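The fields above correspond to an 8-bit `BitsAndBytesConfig` in `transformers`; a minimal sketch for reloading a base model with the same settings (the base checkpoint is not named in this card, so the one below is only a placeholder):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit config mirroring the fields listed above
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

# "bigscience/bloom-560m" is a placeholder; the card does not name the base model
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m",
                                             quantization_config=bnb_config)
```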
sophiaaez/distilhubert-finetuned-ravdess-finetuned-gtzan
sophiaaez
2023-10-04T13:10:59Z
165
0
transformers
[ "transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:pollner/distilhubert-finetuned-ravdess", "base_model:finetune:pollner/distilhubert-finetuned-ravdess", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2023-10-04T11:29:58Z
--- license: apache-2.0 base_model: pollner/distilhubert-finetuned-ravdess tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-ravdess-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.82 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-ravdess-finetuned-gtzan This model is a fine-tuned version of [pollner/distilhubert-finetuned-ravdess](https://huggingface.co/pollner/distilhubert-finetuned-ravdess) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 1.0115 - Accuracy: 0.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2891 | 1.0 | 113 | 1.1911 | 0.58 | | 1.0882 | 2.0 | 226 | 1.0632 | 0.64 | | 0.5454 | 3.0 | 339 | 0.7916 | 0.8 | | 0.5953 | 4.0 | 452 | 0.9244 | 0.71 | | 0.2773 | 5.0 | 565 | 0.8284 | 0.79 | | 0.1933 | 6.0 | 678 | 1.0999 | 0.75 | | 0.1545 | 7.0 | 791 | 0.8734 | 0.82 | | 0.0123 | 8.0 | 904 | 0.8838 | 0.82 | | 0.1267 | 9.0 | 1017 | 0.9685 | 0.83 | | 0.0058 | 10.0 | 1130 | 1.0115 | 0.82 | ### Framework versions - Transformers 4.35.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
napatswift/mt5-base-th-budget-seq
napatswift
2023-10-04T13:02:38Z
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-03T08:55:02Z
--- tags: - generated_from_trainer model-index: - name: mt5-base-th-budget-seq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-th-budget-seq This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0780 - eval_runtime: 6.1204 - eval_samples_per_second: 6.536 - eval_steps_per_second: 6.536 - epoch: 1.65 - step: 603 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 10 ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
petergriger/atari_dqn
petergriger
2023-10-04T12:54:59Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T12:54:24Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 589.00 +/- 65.87 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga petergriger -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga petergriger -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga petergriger ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
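Outside the RL Zoo scripts, the checkpoint can also be loaded directly with Stable-Baselines3. The sketch below recreates the training-time preprocessing implied by the hyperparameters (`AtariWrapper` plus a 4-frame stack); the `.zip` path is an assumption based on the `logs/` layout used by `rl_zoo3.load_from_hub`:

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Atari wrappers + 4-frame stacking, matching env_wrapper and frame_stack above
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Path is an assumption; adjust to wherever load_from_hub saved the model
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```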
Yacong/allu-lora-trained-xl
Yacong
2023-10-04T12:49:25Z
6
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-10-04T12:15:51Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of allu doll tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - Yacong/allu-lora-trained-xl These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of allu doll using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: True. Special VAE used for training: None.
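A minimal `diffusers` sketch for trying these weights, using the instance prompt from the card (sampler, step count and other settings are left at defaults and may need tuning):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaption weights from this repo
pipe.load_lora_weights("Yacong/allu-lora-trained-xl")

image = pipe("a photo of allu doll").images[0]
image.save("allu.png")
```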
andreabac3/Open_Fauno-Italian-LLM-7bB
andreabac3
2023-10-04T12:44:35Z
2
0
peft
[ "peft", "region:us" ]
null
2023-10-04T12:44:34Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
Vishal24/function-calling-adapters-v4
Vishal24
2023-10-04T12:38:33Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-10-04T12:38:24Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0 ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0.dev0
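For reference, the 4-bit NF4 settings listed above map onto `transformers`' `BitsAndBytesConfig` roughly as in this sketch (the base model name is taken from the card header):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 4-bit config mirroring the fields listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
                                            quantization_config=bnb_config,
                                            device_map="auto")
```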
dimcall/xlm-roberta-base-finetuned-panx-de
dimcall
2023-10-04T12:26:50Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T12:19:22Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8572726756403704 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1403 - F1: 0.8573 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2598 | 1.0 | 525 | 0.1716 | 0.8162 | | 0.1294 | 2.0 | 1050 | 0.1431 | 0.8432 | | 0.0826 | 3.0 | 1575 | 0.1403 | 0.8573 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.11.0+cu115 - Datasets 1.16.1 - Tokenizers 0.14.0
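A minimal inference sketch for this NER checkpoint (the sample sentence is only an illustration; labels follow the PAN-X tag set):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="dimcall/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Berlin."))
```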
ssarae/dreambooth_kuromi_ver
ssarae
2023-10-04T12:24:48Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-10-04T09:07:49Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: A znfhal kuromi tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - ssarae/dreambooth_kuromi_ver These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on A znfhal kuromi using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False.
michaelsinanta/smoke_detector
michaelsinanta
2023-10-04T12:24:40Z
193
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:smokedataset", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-04T10:39:10Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - smokedataset metrics: - accuracy model-index: - name: smoke_detector results: - task: name: Image Classification type: image-classification dataset: name: smokedataset type: smokedataset config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9951117318435754 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smoke_detector This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the smokedataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0187 - Accuracy: 0.9951 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1404 | 1.0 | 716 | 0.0396 | 0.9902 | | 0.0493 | 2.0 | 1432 | 0.0337 | 0.9920 | | 0.0237 | 3.0 | 2148 | 0.0263 | 0.9934 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
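A minimal inference sketch (the image path is a placeholder):

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("michaelsinanta/smoke_detector")
model = AutoModelForImageClassification.from_pretrained("michaelsinanta/smoke_detector")

# "frame.jpg" is a placeholder input image
inputs = processor(images=Image.open("frame.jpg"), return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```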
rafelsiregar/alexnet_model_pretrained
rafelsiregar
2023-10-04T12:18:25Z
0
0
null
[ "biology", "image-classification", "en", "region:us" ]
image-classification
2023-10-03T16:08:06Z
--- language: - en metrics: - accuracy pipeline_tag: image-classification tags: - biology ---
erenfazlioglu/whisper-small-turkish-tr-best
erenfazlioglu
2023-10-04T12:16:30Z
104
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-10-04T11:51:56Z
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-small-tr-best results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-tr-best This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3166 - Wer: 26.3414 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2521 | 0.89 | 1000 | 0.4176 | 37.0010 | | 0.1283 | 1.77 | 2000 | 0.3558 | 30.5661 | | 0.0512 | 2.66 | 3000 | 0.3270 | 29.3765 | | 0.0151 | 3.54 | 4000 | 0.3166 | 26.3414 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
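A minimal transcription sketch (the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="erenfazlioglu/whisper-small-turkish-tr-best")
print(asr("turkish_sample.wav")["text"])
```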
Jaynm31245/svnit-chatbot
Jaynm31245
2023-10-04T12:12:56Z
0
0
peft
[ "peft", "arxiv:1910.09700", "region:us" ]
null
2023-10-04T12:09:31Z
--- library_name: peft base_model: decapoda-research/llama-7b-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.0.dev0
Yntec/3DCute
Yntec
2023-10-04T12:07:54Z
53
1
diffusers
[ "diffusers", "safetensors", "3D", "aodai", "Character", "StableDiffusionVN", "text-to-image", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-04T10:57:40Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- 3D
- aodai
- Character
- StableDiffusionVN
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---

# SDVN4-3DCuteVN

This is the SDVN4-3DCuteVN model with the MoistMixV2 VAE baked in.

Comparison:

![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/fDknpQxubFZL_7u4MxskU.png)

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/lk636ivh4JnulxYvQAhYQ.png)

PRETTY CUTE GIRL BY ROSSDRAWS. An extradimensional creature buying donuts. curly hair. Pixar animation.

Original page: https://civitai.com/models/103169/sdvn4-3dcutevn
DamarJati/Face-Mask-Detection
DamarJati
2023-10-04T11:47:51Z
253
2
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-04T06:40:55Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9991525423728813 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Face-Mask-Detection This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0051 - Accuracy: 0.9992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0344 | 1.0 | 83 | 0.0051 | 0.9992 | | 0.0112 | 2.0 | 166 | 0.0052 | 0.9983 | | 0.0146 | 3.0 | 249 | 0.0045 | 0.9992 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
TheBloke/Dans-AdventurousWinds-7B-GGUF
TheBloke
2023-10-04T11:47:08Z
167
4
transformers
[ "transformers", "gguf", "mistral", "en", "base_model:Dans-DiscountModels/Dans-AdventurousWinds-7b", "base_model:quantized:Dans-DiscountModels/Dans-AdventurousWinds-7b", "region:us" ]
null
2023-10-04T11:31:40Z
---
base_model: PocketDoc/Dans-AdventurousWinds-7b
inference: false
language:
- en
model_creator: PocketDoc Labs
model_name: Dans AdventurousWinds 7B
model_type: mistral
prompt_template: '[Genres: Science Fiction]
  [Tags: humor, old school, sci fi]
  [Mode: Adventure]
  [Description: A puzzle about committing acts of financial skulduggery and exploiting
  ridiculous magical items.]
  [Misc: Writing era: 1993]
  [Intro]
  It is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn''t care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...

  > {prompt}
  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Dans AdventurousWinds 7B - GGUF
- Model creator: [PocketDoc Labs](https://huggingface.co/PocketDoc)
- Original model: [Dans AdventurousWinds 7B](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b)

<!-- description start -->
## Description

This repo contains GGUF format model files for [PocketDoc Labs's Dans AdventurousWinds 7B](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF)
* [PocketDoc Labs's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Adventure

```
[Genres: Science Fiction]
[Tags: humor, old school, sci fi]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
It is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...

> {prompt}
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization.
Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [dans-adventurouswinds-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [dans-adventurouswinds-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [dans-adventurouswinds-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [dans-adventurouswinds-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [dans-adventurouswinds-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [dans-adventurouswinds-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [dans-adventurouswinds-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [dans-adventurouswinds-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [dans-adventurouswinds-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [dans-adventurouswinds-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [dans-adventurouswinds-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [dans-adventurouswinds-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Dans-AdventurousWinds-7B-GGUF/blob/main/dans-adventurouswinds-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Dans-AdventurousWinds-7B-GGUF and below it, a specific filename to download, such as: dans-adventurouswinds-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF dans-adventurouswinds-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Dans-AdventurousWinds-7B-GGUF dans-adventurouswinds-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m dans-adventurouswinds-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[Genres: Science Fiction]\n[Tags: humor, old school, sci fi]\n[Mode: Adventure]\n[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]\n[Misc: Writing era: 1993]\n[Intro]\nIt is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried. Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how...\n\n> {prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Dans-AdventurousWinds-7B-GGUF", model_file="dans-adventurouswinds-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: PocketDoc Labs's Dans AdventurousWinds 7B

# What is the model for?

This model is proficient in crafting text-based adventure games. It can produce both concise replies and more expansive, novel-like descriptions. The ability to alternate between these two response styles can be triggered by a distinct system message.

# What's in the sausage?

This model was trained on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) using a custom version of the floyd and cys datasets from the skein text adventure dataset. The dataset has been improved over what was used in my previous model "Dans-CreepingSenseOfDoom" and is this time presented in the more traditional adventure mode format.

## Prompt format and examples:

#### Format description:
Traditional adventure format with a description of the genre, tags, and other misc information at the top, followed by an optional introductory message. User messages are prefixed with two new lines and "> "; model messages begin one new line after the last user message, or after the description message if an introductory message is desired. (A minimal Python sketch of this turn loop appears at the end of this card.)

#### Examples of shorter responses:

```
[Genres: Science Fiction]
[Tags: humor, old school, sci fi]
[Mode: Adventure]
[Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.]
[Misc: Writing era: 1993]
[Intro]
It is the year 2045. You are a young man in his twenties living in New York City. Your father was an inventor who died when you were very small; your mother raised you alone for many years until she remarried.
Now you live with your stepfather, but he doesn't care much for you and has never given you any money to help support yourself. You have no job and little hope of getting one because of your lack of experience. However, you do have some unusual abilities that could be put to good use if only you knew how... > Look around You are standing on the sidewalk outside your apartment building. The street is busy with traffic and people going about their daily business. Across the street from you stands a tall office tower where your stepfather works as an accountant. To your left is a convenience store where you can buy food and drinks. > You go into the store The doorbell rings as you enter the store. An elderly woman behind the counter looks up at you expectantly. "Hello," she says cheerfully, "what can I get for you?" > You tell her you'll browse "All right then," she replies brightly, "just take your time." > You ask if she's hiring "I'm sorry," she says sadly, "but we don't need anyone new just now." > You leave without buying anything As you walk out of the store, the bell jingles merrily. > You go see your stepfather When you arrive at work, your stepfather greets you warmly. He invites you into his office and offers you a seat. "So," he begins eagerly, "how would you like to make some extra cash? There's this guy named Mr. Jones who owes my company thousands of dollars. If we could somehow convince him to pay us back, it would really help our bottom line." He pauses expectantly, waiting for your response. > You ask him whats involved "Well," he continues excitedly, "it turns out that Mr. Jones has a very valuable antique watch that he keeps locked away in a safe deposit box at the bank. We think that if we could somehow break into the box and steal the watch, we could sell it for enough money to cover his debt." Your heart races as you realize the potential danger of such a scheme. But before you can say anything, your stepfather cuts off your thoughts by continuing, "Of course, there's always the chance that something could go wrong and we might end up losing everything..." ``` #### Examples of longer novel like responses: ``` [Genres: grimdark fantasy] [Tags: Fantasy, Grimdark, Serious, Villain Protagonist, War] [Description: This story is definitely more "story" than "game", however there are two distinct paths that can be followed, each with their own pitfalls and additional choices. This story can get long, and assuming you're surviving, years do pass in it and you and your environment will change.] [First message intro] [Mode: Interactive Storyteller] You have been a soldier for as long as you remember. You don't know who you were before the war began, but you were someone else. Someone less angry. More forgiving. Perhaps even kinder. It all started when you were very young. The King of this land was attacked by an army from another kingdom. His forces were outmatched, his city overrun. He had no choice but to flee into the forest, taking what few soldiers he could muster along with him. As the king fled deeper into the woods, his army fell back, forming a protective wall around them. They fought valiantly against the invaders, giving the king time to escape, but they eventually succumbed to overwhelming numbers. When the fighting finally ended, only one man remained alive; you. The king found you hiding behind some bushes, covered in blood and dirt. He took pity on you and offered you a place at court if you would swear allegiance to him. Without hesitation, you accepted. 
And so began your journey down the path of darkness... > You meet with the king to discuss the plans A week has passed since the battle and you have just met with the king to discuss future plans. It seems like things are finally starting to settle down after such chaos. However, there is still much work to be done before we can truly consider ourselves safe again. The first order of business is rebuilding our army. We need men willing to fight for us once more. Men who understand what needs to be done and aren't afraid to die trying. I believe I have found such warriors among those loyal subjects who survived the initial attack. If we train them properly, then perhaps they might prove useful against our enemies next time around. Next, we must find ways to improve our defenses. There are many areas where improvements could be made. For instance, we should construct better walls around our cities. These walls must not only keep intruders out, but also allow us easy access inside should anyone try to breach them. In addition, we must strengthen our gates so that no matter how strong our enemy becomes, he cannot break through without great effort. Finally, we must fortify our castles and other strategic locations throughout the country. By doing these things, we will ensure that nobody ever attacks us again! > You ask what he desires your involvement to be After hearing about his plans, you feel compelled to help him achieve victory. After all, you owe everything to him. But before agreeing to anything, you want to make sure that he understands exactly what you expect from him. So you decide to approach him directly. "Your Majesty," you begin, "I am honored beyond measure to serve under your command. However, before committing myself fully to this cause, I wish to clarify certain matters." He nods slowly. "Very well. What would you like me to explain?" "Well," you say, "first off, I want to know why you chose me above everyone else? Why did you pick me specifically?" He smiles softly. "Because I saw potential within you. Potential which others lacked. Potentials which could prove valuable during times of crisis. Therefore, I felt confident enough to entrust my fate to you." "But what makes you think I possess such abilities?" you ask curiously. "Oh, nothing special really," he replies casually. "Just raw determination combined with intelligence and resourcefulness. Those qualities alone are enough to carry us forward toward victory!" ``` # Some quick and dirty training details: - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Sequence length: 4096 - \# of epochs: 3 - Training time: 1 hour - Hardware: 1x RTX 3090 - Training type: QLoRA - PEFT R/A: 32/32 # Credits: ### Skein Text Adventure Data: Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities. <!-- original-model-card end -->
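## Example: scripting the adventure format from Python

The turn format described in the original card (user turns prefixed with two newlines and `> `) is easy to drive programmatically. Below is a minimal, untested sketch using `llama-cpp-python`; the filename matches the Q4_K_M file from this repo, but the context size, sampling settings and stop sequence are illustrative assumptions carried over from the llama.cpp example above, not tested values.

```python
from llama_cpp import Llama

# The genre/tags header from the card's example, ending with the [Intro] text
HEADER = (
    "[Genres: Science Fiction]\n"
    "[Tags: humor, old school, sci fi]\n"
    "[Mode: Adventure]\n"
    "[Description: A puzzle about committing acts of financial skulduggery"
    " and exploiting ridiculous magical items.]\n"
    "[Misc: Writing era: 1993]\n"
    "[Intro]\n"
    "It is the year 2045. You are a young man in his twenties living in New York City."
)

# Offload layers to GPU with n_gpu_layers; set it to 0 for CPU-only inference
llm = Llama(model_path="dans-adventurouswinds-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

history = HEADER
while True:
    action = input("> ")
    # User turns are two newlines plus "> "; the model's reply starts on the next line
    history += f"\n\n> {action}\n"
    out = llm(history, max_tokens=256, temperature=0.7, repeat_penalty=1.1, stop=["\n\n> "])
    reply = out["choices"][0]["text"]
    print(reply)
    history += reply
```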
vodiylik/xls-r-uzbek-cv10-full
vodiylik
2023-10-04T11:31:50Z
96
3
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "pretraining", "automatic-speech-recognition", "mozilla-foundation/common_voice_10_0", "generated_from_trainer", "uz", "dataset:common_voice_10_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-08-12T09:51:59Z
--- language: - uz license: apache-2.0 tags: - automatic-speech-recognition - mozilla-foundation/common_voice_10_0 - generated_from_trainer datasets: - common_voice_10_0 base_model: facebook/wav2vec2-xls-r-300m model-index: - name: xls-r-uzbek-cv10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-uzbek-cv10 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_10_0 - UZ dataset. It achieves the following results on the evaluation set: - Loss: 0.2491 - Wer: 0.2588 - Cer: 0.0513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:------:|:---------------:|:------:| | 3.1215 | 0.68 | 500 | 1.0 | 3.1188 | 1.0 | | 2.8562 | 1.36 | 1000 | 0.9689 | 2.5724 | 1.0002 | | 1.2709 | 2.04 | 1500 | 0.1471 | 0.6278 | 0.6478 | | 1.0817 | 2.72 | 2000 | 0.1304 | 0.4989 | 0.5931 | | 0.9801 | 3.4 | 2500 | 0.1225 | 0.4582 | 0.5667 | | 0.951 | 4.08 | 3000 | 0.1149 | 0.4239 | 0.5381 | | 0.8834 | 4.76 | 3500 | 0.1092 | 0.4016 | 0.5158 | | 0.857 | 5.44 | 4000 | 0.1047 | 0.3785 | 0.4992 | | 0.8307 | 6.12 | 4500 | 0.1004 | 0.3720 | 0.4811 | | 0.805 | 6.8 | 5000 | 0.0937 | 0.3450 | 0.4537 | | 0.7828 | 7.48 | 5500 | 0.0912 | 0.3421 | 0.4460 | | 0.7789 | 8.16 | 6000 | 0.0890 | 0.3295 | 0.4337 | | 0.755 | 8.84 | 6500 | 0.0862 | 0.3257 | 0.4222 | | 0.7464 | 9.52 | 7000 | 0.0847 | 0.3269 | 0.4155 | | 0.7293 | 10.2 | 7500 | 0.0823 | 0.3121 | 0.4025 | | 0.7283 | 10.88 | 8000 | 0.0789 | 0.2991 | 0.3941 | | 0.7145 | 11.56 | 8500 | 0.0786 | 0.2961 | 0.3868 | | 0.6963 | 12.24 | 9000 | 0.0767 | 0.2972 | 0.3784 | | 0.6981 | 12.92 | 9500 | 0.0757 | 0.2880 | 0.3750 | | 0.6888 | 13.6 | 10000 | 0.0745 | 0.2865 | 0.3703 | | 0.6733 | 14.29 | 10500 | 0.0744 | 0.2887 | 0.3663 | | 0.6701 | 14.97 | 11000 | 0.0735 | 0.2857 | 0.3624 | | 0.6634 | 15.65 | 11500 | 0.0723 | 0.2822 | 0.3581 | | 0.6484 | 16.33 | 12000 | 0.0706 | 0.2778 | 0.3503 | | 0.6626 | 17.01 | 12500 | 0.0697 | 0.2697 | 0.3477 | | 0.6341 | 17.69 | 13000 | 0.0708 | 0.2804 | 0.3511 | | 0.6402 | 18.37 | 13500 | 0.0681 | 0.2665 | 0.3418 | | 0.6343 | 19.05 | 14000 | 0.0687 | 0.2748 | 0.3425 | | 0.6383 | 19.73 | 14500 | 0.0677 | 0.2696 | 0.3383 | | 0.6178 | 20.41 | 15000 | 0.0690 | 0.2743 | 0.3417 | | 0.6097 | 21.09 | 15500 | 0.0671 | 0.2663 | 0.3352 | | 0.6245 | 21.77 | 16000 | 0.0665 | 0.2685 | 0.3318 | | 0.6137 | 22.45 | 16500 | 0.0655 | 0.2700 | 0.3262 | | 0.6018 | 23.13 | 17000 | 0.0652 | 0.2657 | 0.3225 | | 0.6063 | 23.81 | 17500 | 0.0663 | 0.2664 | 0.3276 | | 0.5917 | 24.49 | 18000 | 0.0658 | 0.2725 | 0.3264 | | 0.5984 | 25.17 | 18500 | 0.0643 | 0.2593 | 0.3197 | | 0.5949 | 25.85 | 19000 | 0.0635 | 0.2581 | 0.3161 | | 0.5863 | 26.53 | 19500 | 0.0639 | 0.2543 | 0.3196 | | 0.5858 
| 27.21 | 20000 | 0.0628 | 0.2620 | 0.3136 | | 0.5902 | 27.89 | 20500 | 0.0627 | 0.2549 | 0.3157 | | 0.5794 | 28.57 | 21000 | 0.0624 | 0.2543 | 0.3136 | | 0.5744 | 29.25 | 21500 | 0.0620 | 0.2542 | 0.3091 | | 0.5899 | 29.93 | 22000 | 0.0624 | 0.2540 | 0.3122 | | 0.5597 | 30.61 | 22500 | 0.0609 | 0.2500 | 0.3057 | | 0.5595 | 31.29 | 23000 | 0.0616 | 0.2539 | 0.3087 | | 0.5664 | 31.97 | 23500 | 0.0610 | 0.2504 | 0.3070 | | 0.5608 | 32.65 | 24000 | 0.0611 | 0.2535 | 0.3066 | | 0.5557 | 33.33 | 24500 | 0.0608 | 0.2538 | 0.3047 | | 0.5741 | 34.01 | 25000 | 0.0596 | 0.2480 | 0.3009 | | 0.5614 | 34.69 | 25500 | 0.0601 | 0.2516 | 0.3033 | | 0.5436 | 35.37 | 26000 | 0.0601 | 0.2540 | 0.3004 | | 0.555 | 36.05 | 26500 | 0.0595 | 0.2486 | 0.2993 | | 0.5474 | 36.73 | 27000 | 0.0598 | 0.2536 | 0.3003 | | 0.5352 | 37.41 | 27500 | 0.0597 | 0.2589 | 0.2986 | | 0.5489 | 38.1 | 28000 | 0.0586 | 0.2485 | 0.2925 | | 0.5438 | 38.77 | 28500 | 0.0581 | 0.2500 | 0.2908 | | 0.541 | 39.46 | 29000 | 0.0577 | 0.2451 | 0.2879 | | 0.5462 | 40.14 | 29500 | 0.0581 | 0.2510 | 0.2935 | | 0.529 | 40.82 | 30000 | 0.0575 | 0.2435 | 0.2879 | | 0.5169 | 41.5 | 30500 | 0.0572 | 0.2474 | 0.2860 | | 0.5281 | 42.18 | 31000 | 0.0575 | 0.2478 | 0.2884 | | 0.527 | 42.86 | 31500 | 0.0568 | 0.2492 | 0.2845 | | 0.5172 | 43.54 | 32000 | 0.0575 | 0.2451 | 0.2885 | | 0.5154 | 44.22 | 32500 | 0.0574 | 0.2490 | 0.2873 | | 0.5129 | 44.9 | 33000 | 0.0569 | 0.2446 | 0.2853 | | 0.5075 | 45.58 | 33500 | 0.0565 | 0.2485 | 0.2828 | | 0.5077 | 46.26 | 34000 | 0.0559 | 0.2452 | 0.2807 | | 0.5004 | 46.94 | 34500 | 0.0572 | 0.2501 | 0.2882 | | 0.5319 | 47.62 | 35000 | 0.0575 | 0.2516 | 0.2856 | | 0.4956 | 48.3 | 35500 | 0.0567 | 0.2495 | 0.2821 | | 0.5053 | 48.98 | 36000 | 0.0565 | 0.2482 | 0.2825 | | 0.5014 | 49.66 | 36500 | 0.0559 | 0.2441 | 0.2808 | | 0.4945 | 50.34 | 37000 | 0.0562 | 0.2460 | 0.2807 | | 0.51 | 51.02 | 37500 | 0.0547 | 0.2434 | 0.2741 | | 0.5095 | 51.7 | 38000 | 0.0558 | 0.2434 | 0.2790 | | 0.5026 | 52.38 | 38500 | 0.0560 | 0.2478 | 0.2787 | | 0.5081 | 53.06 | 39000 | 0.0566 | 0.2485 | 0.2821 | | 0.5021 | 53.74 | 39500 | 0.0551 | 0.2410 | 0.2752 | | 0.4945 | 54.42 | 40000 | 0.0552 | 0.2436 | 0.2766 | | 0.4882 | 55.1 | 40500 | 0.0555 | 0.2438 | 0.2769 | | 0.497 | 55.78 | 41000 | 0.0550 | 0.2423 | 0.2758 | | 0.4925 | 56.46 | 41500 | 0.0560 | 0.2474 | 0.2790 | | 0.4894 | 57.14 | 42000 | 0.0559 | 0.2497 | 0.2797 | | 0.4767 | 57.82 | 42500 | 0.0556 | 0.2528 | 0.2800 | | 0.4796 | 58.5 | 43000 | 0.0549 | 0.2463 | 0.2755 | | 0.4767 | 59.18 | 43500 | 0.0548 | 0.2452 | 0.2753 | | 0.4786 | 59.86 | 44000 | 0.0551 | 0.2480 | 0.2769 | | 0.4804 | 60.54 | 44500 | 0.0556 | 0.2514 | 0.2789 | | 0.4794 | 61.22 | 45000 | 0.0539 | 0.2391 | 0.2715 | | 0.4789 | 61.9 | 45500 | 0.0546 | 0.2461 | 0.2725 | | 0.4683 | 62.58 | 46000 | 0.0541 | 0.2444 | 0.2707 | | 0.4721 | 63.27 | 46500 | 0.0539 | 0.2468 | 0.2693 | | 0.4792 | 63.94 | 47000 | 0.0546 | 0.2479 | 0.2738 | | 0.4712 | 64.63 | 47500 | 0.0547 | 0.2466 | 0.2742 | | 0.4607 | 65.31 | 48000 | 0.0539 | 0.2503 | 0.2707 | | 0.4712 | 65.99 | 48500 | 0.0543 | 0.2458 | 0.2718 | | 0.4647 | 66.67 | 49000 | 0.0538 | 0.2474 | 0.2693 | | 0.4736 | 67.35 | 49500 | 0.0541 | 0.2514 | 0.2696 | | 0.4718 | 68.03 | 50000 | 0.0540 | 0.2506 | 0.2692 | | 0.4695 | 68.71 | 50500 | 0.0538 | 0.2499 | 0.2675 | | 0.4549 | 69.39 | 51000 | 0.0534 | 0.2491 | 0.2669 | | 0.4605 | 70.07 | 51500 | 0.0532 | 0.2497 | 0.2660 | | 0.4538 | 70.75 | 52000 | 0.0536 | 0.2472 | 0.2684 | | 0.4571 | 71.43 | 52500 | 0.0523 | 0.2441 | 0.2629 | | 0.4608 | 
72.11 | 53000 | 0.0529 | 0.2469 | 0.2652 | | 0.4541 | 72.79 | 53500 | 0.0533 | 0.2498 | 0.2673 | | 0.4424 | 73.47 | 54000 | 0.0530 | 0.2504 | 0.2658 | | 0.4482 | 74.15 | 54500 | 0.0534 | 0.2517 | 0.2684 | | 0.4554 | 74.83 | 55000 | 0.0529 | 0.2471 | 0.2656 | | 0.444 | 75.51 | 55500 | 0.0535 | 0.2493 | 0.2675 | | 0.4464 | 76.19 | 56000 | 0.0524 | 0.2461 | 0.2635 | | 0.4436 | 76.87 | 56500 | 0.0526 | 0.2479 | 0.2641 | | 0.4432 | 77.55 | 57000 | 0.0526 | 0.2513 | 0.2641 | | 0.4459 | 78.23 | 57500 | 0.0521 | 0.2460 | 0.2625 | | 0.4433 | 78.91 | 58000 | 0.0521 | 0.2457 | 0.2622 | | 0.4407 | 79.59 | 58500 | 0.0528 | 0.2531 | 0.2659 | | 0.4389 | 80.27 | 59000 | 0.0521 | 0.2485 | 0.2631 | | 0.4384 | 80.95 | 59500 | 0.0522 | 0.2502 | 0.2653 | | 0.4306 | 81.63 | 60000 | 0.0528 | 0.2480 | 0.2665 | | 0.4505 | 82.31 | 60500 | 0.0523 | 0.2461 | 0.2637 | | 0.4442 | 82.99 | 61000 | 0.0523 | 0.2519 | 0.2641 | | 0.4349 | 83.67 | 61500 | 0.0522 | 0.2509 | 0.2625 | | 0.4398 | 84.35 | 62000 | 0.0523 | 0.2510 | 0.2659 | | 0.4398 | 85.03 | 62500 | 0.0526 | 0.2507 | 0.2648 | | 0.4355 | 85.71 | 63000 | 0.0523 | 0.2500 | 0.2653 | | 0.4373 | 86.39 | 63500 | 0.0524 | 0.2523 | 0.2650 | | 0.4391 | 87.07 | 64000 | 0.0523 | 0.2509 | 0.2635 | | 0.4381 | 87.75 | 64500 | 0.0521 | 0.2502 | 0.2635 | | 0.4297 | 88.43 | 65000 | 0.0521 | 0.2521 | 0.2632 | | 0.44 | 89.12 | 65500 | 0.0520 | 0.2507 | 0.2624 | | 0.4313 | 89.8 | 66000 | 0.0519 | 0.2497 | 0.2623 | | 0.4402 | 90.48 | 66500 | 0.0517 | 0.2488 | 0.2608 | | 0.4324 | 91.16 | 67000 | 0.0512 | 0.2485 | 0.2585 | | 0.4317 | 91.84 | 67500 | 0.0513 | 0.2488 | 0.2587 | | 0.437 | 92.52 | 68000 | 0.0513 | 0.2473 | 0.2590 | | 0.4389 | 93.2 | 68500 | 0.0512 | 0.2472 | 0.2581 | | 0.4428 | 93.88 | 69000 | 0.0512 | 0.2475 | 0.2587 | | 0.4294 | 94.56 | 69500 | 0.0513 | 0.2489 | 0.2596 | | 0.4247 | 95.24 | 70000 | 0.0515 | 0.2499 | 0.2597 | | 0.4309 | 95.92 | 70500 | 0.0514 | 0.2493 | 0.2590 | | 0.4366 | 96.6 | 71000 | 0.0512 | 0.2492 | 0.2592 | | 0.4245 | 97.28 | 71500 | 0.0513 | 0.2493 | 0.2587 | | 0.4346 | 97.96 | 72000 | 0.0512 | 0.2478 | 0.2583 | | 0.4289 | 98.64 | 72500 | 0.0512 | 0.2489 | 0.2585 | | 0.4246 | 99.32 | 73000 | 0.0513 | 0.2487 | 0.2589 | | 0.4241 | 100.0 | 73500 | 0.0513 | 0.2491 | 0.2588 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.10.3 ### Credits Author: Shukrullo Turgunov (aka Vodiylik)
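### Example usage

The card above gives no inference example; here is a minimal sketch, assuming the checkpoint works with the standard `transformers` ASR pipeline (the audio path is a placeholder and should be a 16 kHz mono recording):

```python
from transformers import pipeline

# Load the fine-tuned XLS-R (wav2vec2) checkpoint; CTC decoding is handled by the pipeline
asr = pipeline("automatic-speech-recognition", model="vodiylik/xls-r-uzbek-cv10-full")

# Transcribe a recording (placeholder path)
print(asr("uzbek_sample.wav")["text"])
```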
wooii/ppo-SnowballTarget
wooii
2023-10-04T11:30:49Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-10-04T11:11:56Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wooii/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
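To run the agent locally instead of in the browser, a minimal sketch of pulling the trained model from the Hub with the ML-Agents Hub integration (the local directory name is just an example):

```bash
# Fetch the trained model, config and results into ./downloads
mlagents-load-from-hf --repo-id="wooii/ppo-SnowballTarget" --local-dir="./downloads"
```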
BornSaint/whisper-pt-jax
BornSaint
2023-10-04T11:27:40Z
5
1
transformers
[ "transformers", "jax", "whisper", "automatic-speech-recognition", "pt", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-06-28T23:27:23Z
---
license: apache-2.0
language:
- pt
---

[pierreguillou/whisper-medium-portuguese](https://huggingface.co/pierreguillou/whisper-medium-portuguese) converted to JAX for faster use on TPU.
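The card gives no usage example. Below is a minimal, untested sketch of loading the weights with the Flax Whisper classes in `transformers`, assuming this repo holds standard Flax Whisper weights; the processor is taken from the original checkpoint and the audio file path is a placeholder.

```python
import soundfile as sf
from transformers import FlaxWhisperForConditionalGeneration, WhisperProcessor

# Processor (feature extractor + tokenizer) from the original PyTorch checkpoint
processor = WhisperProcessor.from_pretrained("pierreguillou/whisper-medium-portuguese")
model = FlaxWhisperForConditionalGeneration.from_pretrained("BornSaint/whisper-pt-jax")

# Load a 16 kHz mono recording (placeholder path)
audio, sr = sf.read("sample_pt.wav")
inputs = processor(audio, sampling_rate=sr, return_tensors="np")

# Greedy decoding; Flax generate returns an output object with `.sequences`
pred_ids = model.generate(inputs.input_features).sequences
print(processor.batch_decode(pred_ids, skip_special_tokens=True)[0])
```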
Narmadat21/Llama-2-7b-chat-hf-fine-tuned-adapters
Narmadat21
2023-10-04T11:09:59Z
2
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-10-04T11:09:56Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.6.0.dev0
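The `bitsandbytes` settings listed above map directly onto `transformers`' `BitsAndBytesConfig`; here is a minimal, untested sketch of re-loading these adapters on the 4-bit base model (access to the gated Llama 2 base is required):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the quantization config from the training procedure above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Attach the fine-tuned LoRA adapters from this repo
model = PeftModel.from_pretrained(base, "Narmadat21/Llama-2-7b-chat-hf-fine-tuned-adapters")
```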
awrysfab/human_action_classification
awrysfab
2023-10-04T10:50:11Z
193
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-10-04T10:26:27Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: human_action_classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # human_action_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3689 - Accuracy: 0.0728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3354 | 1.0 | 197 | 2.9994 | 0.0717 | | 0.9519 | 2.0 | 394 | 3.3635 | 0.0778 | | 0.8178 | 3.0 | 591 | 3.5103 | 0.0763 | | 0.7122 | 4.0 | 788 | 3.7261 | 0.0683 | | 0.7532 | 5.0 | 985 | 3.7279 | 0.0661 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
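Given the low evaluation accuracy reported above, treat this checkpoint as experimental; still, as a usage sketch it should load with the standard image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# ViT fine-tune; the pipeline applies the saved image processor automatically
clf = pipeline("image-classification", model="awrysfab/human_action_classification")

# Returns the top predicted labels with scores (placeholder path)
print(clf("example_action.jpg"))
```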
Siknote/cakevspavlova
Siknote
2023-10-04T10:47:45Z
0
0
null
[ "image-classification", "en", "license:apache-2.0", "region:us" ]
image-classification
2023-10-04T10:17:25Z
--- license: apache-2.0 language: - en pipeline_tag: image-classification ---
Tommert25/robbert0410_lrate10b32
Tommert25
2023-10-04T10:31:04Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T10:23:13Z
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert0410_lrate10b32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert0410_lrate10b32 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4435 - Precisions: 0.8143 - Recall: 0.8300 - F-measure: 0.8201 - Accuracy: 0.9162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.6187 | 1.0 | 118 | 0.3807 | 0.8761 | 0.6803 | 0.6943 | 0.8771 | | 0.3045 | 2.0 | 236 | 0.3297 | 0.7915 | 0.7331 | 0.7475 | 0.8966 | | 0.1748 | 3.0 | 354 | 0.3503 | 0.7831 | 0.7466 | 0.7553 | 0.9005 | | 0.1059 | 4.0 | 472 | 0.3670 | 0.8133 | 0.7784 | 0.7893 | 0.9086 | | 0.0649 | 5.0 | 590 | 0.3926 | 0.7875 | 0.7973 | 0.7908 | 0.9053 | | 0.0376 | 6.0 | 708 | 0.4213 | 0.7906 | 0.7922 | 0.7906 | 0.9082 | | 0.0221 | 7.0 | 826 | 0.4435 | 0.8143 | 0.8300 | 0.8201 | 0.9162 | | 0.014 | 8.0 | 944 | 0.4521 | 0.8170 | 0.8047 | 0.8090 | 0.9142 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
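No usage example is given above; here is a minimal sketch with the token-classification pipeline. The entity label set comes from the (undocumented) fine-tuning data, so the output below is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Tommert25/robbert0410_lrate10b32",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# RobBERT is a Dutch model, so a Dutch example sentence
print(ner("Willem-Alexander bezoekt vandaag de haven van Rotterdam."))
```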
wooii/Reinforce-Pixelcopter-PLE-v0
wooii
2023-10-04T10:14:19Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T07:19:42Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 35.70 +/- 29.45
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
rezaFarsh/finetuning-sentiment-model-1900-samples-6-labels
rezaFarsh
2023-10-04T10:02:09Z
106
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T09:43:47Z
--- license: apache-2.0 base_model: bert-base-multilingual-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuning-sentiment-model-1900-samples-6-labels results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-1900-samples-6-labels This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1934 - Accuracy: 0.6667 - F1 Score: 0.6574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
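As a usage sketch: the card does not document the six label names, so unless the saved config maps them you should expect generic `LABEL_0`-style ids in the output:

```python
from transformers import pipeline

# Multilingual BERT fine-tune with six sentiment labels
clf = pipeline(
    "text-classification",
    model="rezaFarsh/finetuning-sentiment-model-1900-samples-6-labels",
)

print(clf("I really enjoyed this movie."))
```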
duwi/ppo-Huggy
duwi
2023-10-04T10:00:59Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-10-04T10:00:53Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: duwi/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Tommert25/robbert0410_lrate10b8
Tommert25
2023-10-04T09:54:51Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T09:48:37Z
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert0410_lrate10b8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert0410_lrate10b8 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6067 - Precisions: 0.8082 - Recall: 0.7813 - F-measure: 0.7929 - Accuracy: 0.9106 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.6356 | 1.0 | 471 | 0.4207 | 0.8357 | 0.6959 | 0.6907 | 0.8767 | | 0.3636 | 2.0 | 942 | 0.3759 | 0.7587 | 0.7486 | 0.7497 | 0.8938 | | 0.2131 | 3.0 | 1413 | 0.4114 | 0.8027 | 0.7381 | 0.7548 | 0.8966 | | 0.1356 | 4.0 | 1884 | 0.4721 | 0.8141 | 0.7498 | 0.7682 | 0.9015 | | 0.0768 | 5.0 | 2355 | 0.5470 | 0.7628 | 0.7637 | 0.7575 | 0.8969 | | 0.0459 | 6.0 | 2826 | 0.5884 | 0.7864 | 0.7783 | 0.7807 | 0.9109 | | 0.0267 | 7.0 | 3297 | 0.6067 | 0.8082 | 0.7813 | 0.7929 | 0.9106 | | 0.0183 | 8.0 | 3768 | 0.6205 | 0.7964 | 0.7684 | 0.7786 | 0.9090 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
Tommert25/robbert0410_lrate10b4
Tommert25
2023-10-04T09:45:20Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T09:36:28Z
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert0410_lrate10b4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert0410_lrate10b4 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6818 - Precisions: 0.7943 - Recall: 0.7761 - F-measure: 0.7846 - Accuracy: 0.9080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.6936 | 1.0 | 942 | 0.5273 | 0.8577 | 0.7062 | 0.7069 | 0.8731 | | 0.4407 | 2.0 | 1884 | 0.4780 | 0.7487 | 0.7080 | 0.7142 | 0.8898 | | 0.3023 | 3.0 | 2826 | 0.5526 | 0.7743 | 0.7209 | 0.7150 | 0.8904 | | 0.2057 | 4.0 | 3768 | 0.5627 | 0.7815 | 0.7405 | 0.7559 | 0.8998 | | 0.1333 | 5.0 | 4710 | 0.5509 | 0.7959 | 0.7521 | 0.7680 | 0.9010 | | 0.0896 | 6.0 | 5652 | 0.6215 | 0.7844 | 0.7583 | 0.7699 | 0.9053 | | 0.0538 | 7.0 | 6594 | 0.6694 | 0.7851 | 0.7723 | 0.7766 | 0.9025 | | 0.0316 | 8.0 | 7536 | 0.6818 | 0.7943 | 0.7761 | 0.7846 | 0.9080 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
davidpistori/myprovence
davidpistori
2023-10-04T09:44:23Z
0
0
keras
[ "keras", "fr", "dataset:davidpistori/myprovence", "license:apache-2.0", "region:us" ]
null
2023-10-04T09:39:36Z
--- license: apache-2.0 datasets: - davidpistori/myprovence language: - fr library_name: keras ---
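The card contains no usage instructions; since the repo declares `library_name: keras`, a minimal loading sketch with `huggingface_hub` follows (untested, and the model's inputs and outputs are undocumented):

```python
from huggingface_hub import from_pretrained_keras

# Load the Keras model from the Hub; input/output shapes are not documented in the card
model = from_pretrained_keras("davidpistori/myprovence")
model.summary()
```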
SyedAunZaidi/bart-large-cnn-finetuned-samsum-lora
SyedAunZaidi
2023-10-04T09:43:40Z
3
0
peft
[ "peft", "generated_from_trainer", "dataset:samsum", "base_model:facebook/bart-large-cnn", "base_model:adapter:facebook/bart-large-cnn", "license:mit", "region:us" ]
null
2023-10-03T15:24:55Z
--- license: mit base_model: facebook/bart-large-cnn tags: - generated_from_trainer datasets: - samsum model-index: - name: bart-large-cnn-finetuned-samsum-lora results: [] library_name: peft --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-samsum-lora This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - PEFT 0.5.0 - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
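The card omits an inference example; here is a minimal sketch of loading the LoRA adapter onto its base model for dialogue summarisation (untested; the sample dialogue is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

# Attach the SAMSum LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "SyedAunZaidi/bart-large-cnn-finetuned-samsum-lora")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```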
Tommert25/robbert0410_lrate2.5b32
Tommert25
2023-10-04T09:32:38Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T09:24:48Z
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert0410_lrate2.5b32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert0410_lrate2.5b32 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3700 - Precisions: 0.7911 - Recall: 0.7515 - F-measure: 0.7562 - Accuracy: 0.8997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.8604 | 1.0 | 118 | 0.4897 | 0.8413 | 0.6506 | 0.6616 | 0.8535 | | 0.4417 | 2.0 | 236 | 0.4031 | 0.8327 | 0.6978 | 0.6910 | 0.8714 | | 0.3227 | 3.0 | 354 | 0.3781 | 0.8393 | 0.7154 | 0.7031 | 0.8841 | | 0.2638 | 4.0 | 472 | 0.3463 | 0.7238 | 0.7280 | 0.7254 | 0.8962 | | 0.2102 | 5.0 | 590 | 0.3579 | 0.7670 | 0.7343 | 0.7344 | 0.8955 | | 0.1795 | 6.0 | 708 | 0.3640 | 0.7839 | 0.7433 | 0.7408 | 0.8966 | | 0.1547 | 7.0 | 826 | 0.3659 | 0.7815 | 0.7454 | 0.7493 | 0.8995 | | 0.1401 | 8.0 | 944 | 0.3700 | 0.7911 | 0.7515 | 0.7562 | 0.8997 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
TheBloke/CodeFuse-13B-GPTQ
TheBloke
2023-10-04T09:28:39Z
22
4
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "base_model:codefuse-ai/CodeFuse-13B", "base_model:quantized:codefuse-ai/CodeFuse-13B", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2023-10-04T08:37:06Z
--- base_model: codefuse-ai/CodeFuse-13B inference: false license: other model_creator: CodeFuse AI model_name: Codefuse 13B model_type: gptneox prompt_template: '<|role_start|>system<|role_end|>{system_message} <|role_start|>human<|role_end|>{prompt} <|role_start|>bot<|role_end|> ' quantized_by: TheBloke tasks: - code-generation --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Codefuse 13B - GPTQ - Model creator: [CodeFuse AI](https://huggingface.co/codefuse-ai) - Original model: [Codefuse 13B](https://huggingface.co/codefuse-ai/CodeFuse-13B) <!-- description start --> ## Description This repo contains GPTQ model files for [CodeFuse AI's Codefuse 13B](https://huggingface.co/codefuse-ai/CodeFuse-13B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeFuse-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ) * [CodeFuse AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codefuse-ai/CodeFuse-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeFuse ``` <|role_start|>system<|role_end|>{system_message} <|role_start|>human<|role_end|>{prompt} <|role_start|>bot<|role_end|> ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. 
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 8.61 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 9.35 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 14.66 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 14.95 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 15.84 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 4096 | 8.86 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/CodeFuse-13B-GPTQ` in the "Download model" box. 
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/CodeFuse-13B-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `CodeFuse-13B-GPTQ`:

```shell
mkdir CodeFuse-13B-GPTQ
huggingface-cli download TheBloke/CodeFuse-13B-GPTQ --local-dir CodeFuse-13B-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir CodeFuse-13B-GPTQ
huggingface-cli download TheBloke/CodeFuse-13B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir CodeFuse-13B-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir CodeFuse-13B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeFuse-13B-GPTQ --local-dir CodeFuse-13B-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/CodeFuse-13B-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CodeFuse-13B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/CodeFuse-13B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `CodeFuse-13B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
  * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/CodeFuse-13B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

system_message = "You are a helpful coding assistant."  # set your own system message
prompt = "Tell me about AI"
prompt_template=f'''<|role_start|>system<|role_end|>{system_message}
<|role_start|>human<|role_end|>{prompt}
<|role_start|>bot<|role_end|>
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
```

If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```

### You can then use the following code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/CodeFuse-13B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

system_message = "You are a helpful coding assistant."  # set your own system message
prompt = "Tell me about AI"
prompt_template=f'''<|role_start|>system<|role_end|>{system_message}
<|role_start|>human<|role_end|>{prompt}
<|role_start|>bot<|role_end|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: CodeFuse AI's Codefuse 13B


# Model Card for CodeFuse-13B
![logo](LOGO.png)

## Model Description

CodeFuse-13B is a 13 billion parameter code generation model trained on the GPT-NeoX framework, capable of handling code sequences of up to 4096 characters. The model was pretrained on a dataset of 1,000B tokens of code, Chinese, and English data, covering over 40 programming languages. To further enhance the effectiveness and quality of the generated code, the model was fine-tuned on the CodeFuse-Evol-instruction-66k dataset, enabling it to produce more accurate, efficient, and compliant code. It achieves a Pass@1 of 37.1% on the HumanEval evaluation set (BeamSearch decoding, BeamSize=3).

## Code Community

**Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**)

+ If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨
+ If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨
+ If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨

## Requirements

* Python 3.8 or above.
* PyTorch 1.12 or above, with a recommendation for 2.0 or above.
* Transformers 4.24.0 or above.
* It is advisable to use CUDA 11.4 or above (for GPU users and flash-attention users, this option should be considered).
## Quickstart

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CodeFuse-13B")
model = AutoModelForCausalLM.from_pretrained("CodeFuse-13B", device_map="auto").half().eval()

input_ids = tokenizer.encode("# language: Python\ndef quick_sort(array):\n", return_tensors="pt").to("cuda")
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
```

## MD5

We have noticed that model files may be corrupted during transfer. Please check the MD5 values before use.

| Model File                       | MD5 Value                        |
|:---------------------------------|:--------------------------------:|
| pytorch_model-00001-of-00006.bin | b79e4ccc93c40fa6113aaf6a434473d5 |
| pytorch_model-00002-of-00006.bin | 5a82f19e3f62c693e41fe627084c722b |
| pytorch_model-00003-of-00006.bin | d4b53c391a353d0fc0a1be1c913d5f04 |
| pytorch_model-00004-of-00006.bin | f9e3dcdea13ff02f4e3aad4f9db7a33f |
| pytorch_model-00005-of-00006.bin | 698a8f2f05723a572193733bce12eb93 |
| pytorch_model-00006-of-00006.bin | 312439d0b810f1bb81034fe094ff84c7 |
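If you want to script that check, here is a minimal sketch using Python's standard `hashlib` module; the file names and expected values come from the table above (only the first two are shown — extend the dict as needed), and paths should be adjusted to your local download directory:

```python
import hashlib


def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Expected values from the table above.
expected = {
    "pytorch_model-00001-of-00006.bin": "b79e4ccc93c40fa6113aaf6a434473d5",
    "pytorch_model-00002-of-00006.bin": "5a82f19e3f62c693e41fe627084c722b",
}

for name, md5 in expected.items():
    status = "OK" if md5sum(name) == md5 else "CORRUPTED"
    print(f"{name}: {status}")
```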
thu-ml/unidiffuser-v0
thu-ml
2023-10-04T09:27:38Z
11
9
diffusers
[ "diffusers", "text-to-image", "image-to-text", "image-captioning", "image-variation", "text-variation", "multi-modality", "generative model", "license:agpl-3.0", "diffusers:UniDiffuserPipeline", "region:us" ]
text-to-image
2023-03-12T02:10:09Z
---
license: agpl-3.0
tags:
- text-to-image
- image-to-text
- image-captioning
- image-variation
- text-variation
- multi-modality
- generative model
---

UniDiffuser is a unified diffusion framework to fit all distributions relevant to a set of multi-modal data in one transformer. UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps, without additional overhead.

Specifically, UniDiffuser employs a transformer variant, called [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. The other components act as encoders and decoders for the different modalities: a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder that we finetuned ourselves.

We provide two versions of UniDiffuser:
- [UniDiffuser-v0](https://huggingface.co/thu-ml/unidiffuser-v0): This version is trained on [LAION-5B](https://laion.ai/), which contains noisy web data of text-image pairs.
- [UniDiffuser-v1](https://huggingface.co/thu-ml/unidiffuser-v1): This version is resumed from UniDiffuser-v0, and is further trained with a set of less noisy internal text-image pairs. It uses a flag as its input to distinguish web data and internal data during training.

## Download

We provide files for UniDiffuser-v0 in [this link](https://huggingface.co/thu-ml/unidiffuser-v0/tree/main), and files for UniDiffuser-v1 in [this link](https://huggingface.co/thu-ml/unidiffuser-v1/tree/main). These files are:
- `autoencoder_kl.pth`: the weights of the image autoencoder, converted from [Stable Diffusion](https://github.com/CompVis/stable-diffusion).
- `caption_decoder.pth`: the weights of the finetuned GPT-2 text decoder.
- `uvit_v0.pth`/`uvit_v1.pth`: the weights of U-ViT for UniDiffuser-v0/UniDiffuser-v1.

Note that UniDiffuser-v0 and UniDiffuser-v1 share the same `autoencoder_kl.pth` and `caption_decoder.pth`, so you only need to download them once. All other components are downloaded automatically.

The `diffusers` pipeline for UniDiffuser-v0 can be downloaded as follows:

```python
from diffusers import UniDiffuserPipeline

pipe = UniDiffuserPipeline.from_pretrained("thu-ml/unidiffuser-v0")
```

## Usage

Use the model with the [UniDiffuser codebase](https://github.com/thu-ml/unidiffuser). Here is an example using UniDiffuser-v0 with `diffusers`:

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v0"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Joint image-text generation. The generation task is automatically inferred.
sample = pipe(num_inference_steps=20, guidance_scale=8.0)
image = sample.images[0]
text = sample.text[0]
image.save("unidiffuser_sample_joint_image.png")
print(text)

# The mode can be set manually. The following is equivalent to the above:
pipe.set_joint_mode()
sample2 = pipe(num_inference_steps=20, guidance_scale=8.0)

# Note that if you set the mode manually the pipeline will no longer attempt
# to automatically infer the mode. You can re-enable this with reset_mode().
pipe.reset_mode()

# Text-to-image generation.
prompt = "an elephant under the sea" sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) t2i_image = sample.images[0] t2i_image.save("unidiffuser_sample_text2img_image.png") # Image-to-text generation. image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" response = requests.get(image_url) init_image = Image.open(BytesIO(response.content)).convert("RGB") init_image = init_image.resize((512, 512)) sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) i2t_text = sample.text[0] print(i2t_text) # Image variation can be performed with a image-to-text generation followed by a text-to-image generation: sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) final_image = sample.images[0] final_image.save("unidiffuser_image_variation_sample.png") # Text variation can be performed with a text-to-image generation followed by a image-to-text generation: sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) final_prompt = sample.text[0] print(final_prompt) ``` ## Model Details - **Model type:** Diffusion-based multi-modal generation model - **Language(s):** English - **License:** agpl-3.0 - **Model Description:** This is a model that can perform image, text, text-to-image, image-to-text, and image-text pair generation. Its main component is a [U-ViT](https://github.com/baofff/U-ViT), which parameterizes the joint noise prediction network. Other components perform as encoders and decoders of different modalities, including a pretrained image autoencoder from [Stable Diffusion](https://github.com/CompVis/stable-diffusion), a pretrained [image ViT-B/32 CLIP encoder](https://github.com/openai/CLIP), a pretrained [text ViT-L CLIP encoder](https://huggingface.co/openai/clip-vit-large-patch14), and a [GPT-2](https://github.com/openai/gpt-2) text decoder finetuned by ourselves. - **Resources for more information:** [GitHub Repository](https://github.com/thu-ml/unidiffuser), [Paper](). ## Direct Use _Note: Most of this section is taken from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original), but applies in the same way to UniDiffuser_. The model should be used following the agpl-3.0 license. Possible usage includes - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. 
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
Tommert25/robbert0410_lrate2.5b4
Tommert25
2023-10-04T09:22:48Z
115
0
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "generated_from_trainer", "base_model:pdelobelle/robbert-v2-dutch-base", "base_model:finetune:pdelobelle/robbert-v2-dutch-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-04T08:59:27Z
--- license: mit base_model: pdelobelle/robbert-v2-dutch-base tags: - generated_from_trainer metrics: - recall - accuracy model-index: - name: robbert0410_lrate2.5b4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robbert0410_lrate2.5b4 This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6518 - Precisions: 0.8163 - Recall: 0.7936 - F-measure: 0.8017 - Accuracy: 0.9116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:| | 0.62 | 1.0 | 942 | 0.4300 | 0.8644 | 0.6922 | 0.7030 | 0.8830 | | 0.3475 | 2.0 | 1884 | 0.4044 | 0.8222 | 0.7322 | 0.7464 | 0.8970 | | 0.2227 | 3.0 | 2826 | 0.4658 | 0.7715 | 0.7573 | 0.7476 | 0.9070 | | 0.1488 | 4.0 | 3768 | 0.5292 | 0.8193 | 0.7461 | 0.7655 | 0.9045 | | 0.0983 | 5.0 | 4710 | 0.5855 | 0.7938 | 0.7749 | 0.7829 | 0.9049 | | 0.0652 | 6.0 | 5652 | 0.6155 | 0.8170 | 0.7826 | 0.7976 | 0.9100 | | 0.0419 | 7.0 | 6594 | 0.6306 | 0.8072 | 0.7929 | 0.7971 | 0.9123 | | 0.032 | 8.0 | 7536 | 0.6518 | 0.8163 | 0.7936 | 0.8017 | 0.9116 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
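Since the card does not yet document inference, here is a minimal, hypothetical usage sketch with the `transformers` pipeline; the Dutch example sentence and the `aggregation_strategy` choice are illustrative, not from the card:

```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned RobBERT model.
nlp = pipeline(
    "token-classification",
    model="Tommert25/robbert0410_lrate2.5b4",
    aggregation_strategy="simple",
)

print(nlp("Koning Willem-Alexander bezocht dinsdag Amsterdam."))
```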
ahyar002/emotion_classification
ahyar002
2023-10-04T09:20:49Z
6
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-17T13:55:35Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: emotion_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.53125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.2445 - Accuracy: 0.5312 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 10 | 1.9385 | 0.325 | | No log | 2.0 | 20 | 1.7153 | 0.4188 | | No log | 3.0 | 30 | 1.5905 | 0.3937 | | No log | 4.0 | 40 | 1.4706 | 0.4625 | | No log | 5.0 | 50 | 1.4078 | 0.5062 | | No log | 6.0 | 60 | 1.3739 | 0.4813 | | No log | 7.0 | 70 | 1.3108 | 0.5125 | | No log | 8.0 | 80 | 1.2874 | 0.5312 | | No log | 9.0 | 90 | 1.2810 | 0.5312 | | No log | 10.0 | 100 | 1.2754 | 0.5437 | | No log | 11.0 | 110 | 1.2380 | 0.5563 | | No log | 12.0 | 120 | 1.1721 | 0.6125 | | No log | 13.0 | 130 | 1.2242 | 0.5875 | | No log | 14.0 | 140 | 1.2530 | 0.525 | | No log | 15.0 | 150 | 1.2610 | 0.575 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
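A minimal inference sketch with the `transformers` pipeline; the input image path is a placeholder:

```python
from PIL import Image
from transformers import pipeline

# Image-classification pipeline for the fine-tuned ViT emotion classifier.
classifier = pipeline("image-classification", model="ahyar002/emotion_classification")

# "face.jpg" is a placeholder; use any image of a face.
print(classifier(Image.open("face.jpg")))
```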
amira-morsli/my_awesome_asr_mind_model_g
amira-morsli
2023-10-04T09:04:56Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-09-28T20:58:09Z
--- tags: - generated_from_trainer metrics: - wer model-index: - name: my_awesome_asr_mind_model_g results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_asr_mind_model_g This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5919 - Wer: 0.8075 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 4.5631 | 200.0 | 1000 | 11.4025 | 0.9608 | | 3.0542 | 400.0 | 2000 | 12.4850 | 0.9657 | | 2.304 | 600.0 | 3000 | 13.9588 | 0.9755 | | 1.6633 | 800.0 | 4000 | 14.9246 | 1.0049 | | 1.1861 | 1000.0 | 5000 | 15.8909 | 1.0392 | | 0.8167 | 1200.0 | 6000 | 16.8013 | 1.1127 | | 0.5285 | 1400.0 | 7000 | 17.8491 | 1.1667 | | 0.3458 | 1600.0 | 8000 | 18.9104 | 1.2304 | | 0.2389 | 1800.0 | 9000 | 19.8274 | 1.2549 | | 0.1797 | 2000.0 | 10000 | 20.9407 | 1.3137 | | 0.1399 | 2200.0 | 11000 | 21.9922 | 1.2990 | | 0.1247 | 2400.0 | 12000 | 22.2851 | 1.2892 | | 0.1057 | 2600.0 | 13000 | 23.1505 | 1.2843 | | 0.0941 | 2800.0 | 14000 | 23.6575 | 1.3088 | | 0.0806 | 3000.0 | 15000 | 24.0379 | 1.3137 | | 0.0786 | 3200.0 | 16000 | 24.4104 | 1.3186 | | 0.0756 | 3400.0 | 17000 | 24.6755 | 1.3039 | | 0.0726 | 3600.0 | 18000 | 24.9217 | 1.3284 | | 0.0703 | 3800.0 | 19000 | 25.0183 | 1.3676 | | 0.0691 | 4000.0 | 20000 | 25.0077 | 1.3578 | | 0.1578 | 4200.0 | 21000 | 1.6879 | 0.9057 | | 0.1069 | 4400.0 | 22000 | 1.7429 | 0.9198 | | 0.0904 | 4600.0 | 23000 | 1.7689 | 0.9104 | | 0.0789 | 4800.0 | 24000 | 1.7784 | 0.8821 | | 0.0722 | 5000.0 | 25000 | 1.8151 | 0.9104 | | 0.0683 | 5200.0 | 26000 | 1.9367 | 0.9528 | | 0.0605 | 5400.0 | 27000 | 1.8784 | 0.9198 | | 0.0827 | 5600.0 | 28000 | 0.5633 | 0.7746 | | 0.0684 | 5800.0 | 29000 | 0.5884 | 0.7981 | | 0.0625 | 6000.0 | 30000 | 0.5694 | 0.7981 | | 0.0589 | 6200.0 | 31000 | 0.5863 | 0.7934 | | 0.0552 | 6400.0 | 32000 | 0.5806 | 0.7840 | | 0.0524 | 6600.0 | 33000 | 0.5765 | 0.7981 | | 0.0513 | 6800.0 | 34000 | 0.5865 | 0.7840 | | 0.0483 | 7000.0 | 35000 | 0.5980 | 0.7934 | | 0.0471 | 7200.0 | 36000 | 0.5889 | 0.7981 | | 0.0461 | 7400.0 | 37000 | 0.5821 | 0.8028 | | 0.0444 | 7600.0 | 38000 | 0.5915 | 0.7981 | | 0.0455 | 7800.0 | 39000 | 0.5960 | 0.8028 | | 0.0451 | 8000.0 | 40000 | 0.5919 | 0.8075 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
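A minimal inference sketch with the `transformers` pipeline; the audio file is a placeholder, and Wav2Vec2 models expect 16 kHz mono audio:

```python
from transformers import pipeline

# Automatic-speech-recognition pipeline for the fine-tuned Wav2Vec2 model.
asr = pipeline("automatic-speech-recognition", model="amira-morsli/my_awesome_asr_mind_model_g")

# "sample.wav" is a placeholder audio file.
print(asr("sample.wav")["text"])
```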
petergriger/taxi_qlearning
petergriger
2023-10-04T09:02:44Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-10-04T09:02:42Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_qlearning
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.50 +/- 2.78
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` downloads and unpickles the model; see the sketch below.
model = load_from_hub(repo_id="petergriger/taxi_qlearning", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
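`load_from_hub` is the helper used in the Hugging Face Deep RL course rather than a published library function. A minimal sketch of such a helper, assuming the pickle holds a dict with keys such as `env_id` and `qtable`, could be:

```python
import pickle

import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id, filename):
    """Download a pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="petergriger/taxi_qlearning", filename="q-learning.pkl")

# Greedy policy: pick the highest-valued action for the current state
# (assumes the pickle exposes the Q-table under a "qtable" key).
# state, _ = env.reset()
# action = np.argmax(model["qtable"][state])
```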
Jamessjunk/FumoYukkuri
Jamessjunk
2023-10-04T08:48:36Z
0
0
null
[ "license:other", "region:us" ]
null
2023-10-04T08:47:40Z
--- license: other license_name: other license_link: LICENSE ---
flyover19/10032023
flyover19
2023-10-04T08:28:00Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "custom_code", "base_model:bigcode/santacoder", "base_model:finetune:bigcode/santacoder", "license:bigcode-openrail-m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-10-03T22:12:03Z
--- license: bigcode-openrail-m base_model: bigcode/santacoder tags: - generated_from_trainer model-index: - name: '10032023' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 10032023 This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6282 | 0.05 | 200 | 0.4105 | | 1.7635 | 0.1 | 400 | 0.5228 | | 1.7029 | 0.15 | 600 | 0.8193 | | 1.6817 | 0.2 | 800 | 1.6320 | | 1.6822 | 0.25 | 1000 | 2.8463 | | 1.671 | 0.3 | 1200 | 3.4860 | | 1.6698 | 0.35 | 1400 | 4.1775 | | 1.6631 | 0.4 | 1600 | 5.2973 | | 1.663 | 0.45 | 1800 | 5.8655 | | 1.6599 | 0.5 | 2000 | 5.8967 | | 1.6595 | 0.55 | 2200 | 0.2873 | | 1.6586 | 0.6 | 2400 | 0.3041 | | 1.6564 | 0.65 | 2600 | 0.3210 | | 1.658 | 0.7 | 2800 | 0.3262 | | 1.6549 | 0.75 | 3000 | 0.3136 | | 1.6498 | 0.8 | 3200 | 0.3232 | | 1.6462 | 0.85 | 3400 | 0.3195 | | 1.6454 | 0.9 | 3600 | 0.3216 | | 0.2173 | 0.95 | 3800 | 0.2726 | | 1.6619 | 1.0 | 4000 | 0.2642 | ### Framework versions - Transformers 4.33.3 - Pytorch 2.0.1 - Datasets 2.14.5 - Tokenizers 0.13.3
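A minimal inference sketch; SantaCoder-based checkpoints ship custom modeling code (hence `trust_remote_code=True`), and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flyover19/10032023"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```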
asmaa1/videomae-base-finetuned-SLT-subset
asmaa1
2023-10-04T08:23:43Z
7
0
transformers
[ "transformers", "pytorch", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-09-26T01:15:19Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-SLT-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-SLT-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4062 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 944 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.8603 | 0.06 | 59 | 3.7147 | 0.025 | | 3.7747 | 1.06 | 118 | 3.6984 | 0.05 | | 3.83 | 2.06 | 177 | 3.5418 | 0.075 | | 3.5065 | 3.06 | 236 | 3.3917 | 0.075 | | 3.6541 | 4.06 | 295 | 3.3558 | 0.1 | | 3.5419 | 5.06 | 354 | 3.2460 | 0.15 | | 3.2664 | 6.06 | 413 | 3.0603 | 0.2 | | 3.2295 | 7.06 | 472 | 2.7967 | 0.425 | | 2.829 | 8.06 | 531 | 2.3743 | 0.625 | | 2.5769 | 9.06 | 590 | 1.9349 | 0.675 | | 2.0383 | 10.06 | 649 | 1.3413 | 0.875 | | 1.8328 | 11.06 | 708 | 0.9239 | 0.925 | | 1.0447 | 12.06 | 767 | 0.6702 | 0.975 | | 0.8312 | 13.06 | 826 | 0.5127 | 1.0 | | 0.7203 | 14.06 | 885 | 0.4366 | 1.0 | | 0.6204 | 15.06 | 944 | 0.4062 | 1.0 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
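A minimal inference sketch; the random frames below merely stand in for a real 16-frame clip, and the label names depend on the training dataset:

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

model_id = "asmaa1/videomae-base-finetuned-SLT-subset"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)

# 16 dummy RGB frames of size 224x224 stand in for a real video clip.
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```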
MochaPixel/Chelsie
MochaPixel
2023-10-04T08:23:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-08-07T06:31:13Z
--- license: creativeml-openrail-m ---
ssarae/dreambooth_kuromi
ssarae
2023-10-04T08:07:49Z
0
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-10-03T09:11:31Z
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of znfhal kuromi
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - ssarae/dreambooth_kuromi

These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "a photo of znfhal kuromi" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

![img_0](./image_0.png)
![img_1](./image_1.png)
![img_2](./image_2.png)
![img_3](./image_3.png)

LoRA for the text encoder was enabled: False.
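A minimal sketch for generating with these weights via `diffusers`; the step count and dtype are illustrative choices:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter weights from this repo.
pipe.load_lora_weights("ssarae/dreambooth_kuromi")

# Use the instance prompt the adapter was trained on.
image = pipe("a photo of znfhal kuromi", num_inference_steps=30).images[0]
image.save("kuromi.png")
```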
Bharatmaryada/llama2-qlora-finetunined-french
Bharatmaryada
2023-10-04T08:06:52Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us" ]
null
2023-07-21T06:51:56Z
--- library_name: peft base_model: TinyPixel/Llama-2-7B-bf16-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.0.dev0
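The quantization settings listed above map onto a `transformers` `BitsAndBytesConfig` roughly as follows. This is a sketch of how the adapter could be loaded on the base model, not the exact training script:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Values taken from the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned QLoRA adapter.
model = PeftModel.from_pretrained(base_model, "Bharatmaryada/llama2-qlora-finetunined-french")
```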
reneknaebel/de_dep_hdt_lg
reneknaebel
2023-10-04T07:59:55Z
1
0
spacy
[ "spacy", "token-classification", "de", "model-index", "region:us" ]
token-classification
2022-07-04T13:48:51Z
--- tags: - spacy - token-classification language: - de model-index: - name: de_dep_hdt_lg results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.0 - name: NER Recall type: recall value: 0.0 - name: NER F Score type: f_score value: 0.0 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.9761626479 - task: name: POS type: token-classification metrics: - name: POS (UPOS) Accuracy type: accuracy value: 0.9803122687 - task: name: MORPH type: token-classification metrics: - name: Morph (UFeats) Accuracy type: accuracy value: 0.8550889023 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.9440152238 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.951406886 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.9313120076 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9611378532 ---
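A minimal usage sketch, assuming the pipeline has been installed as a Python package (e.g. from a wheel built from this repo); the German example sentence is illustrative:

```python
import spacy

# German dependency-parsing pipeline trained on the HDT (Hamburg Dependency Treebank).
nlp = spacy.load("de_dep_hdt_lg")

doc = nlp("Die Katze schläft auf dem Sofa.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)
```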