| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-02 12:28:20 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (462 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-02 12:26:48 |
| card | string (length) | 11 | 1.01M |
sujankapali/layoutlmv3-finetuned-invoice
sujankapali
2023-12-02T15:45:41Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-02T15:31:50Z
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-invoice results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-invoice This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2640 - Precision: 1.0 - Recall: 1.0 - F1: 1.0 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 800 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 100 | 2.1602 | 0.1927 | 0.1246 | 0.1514 | 0.4404 | | No log | 3.12 | 200 | 1.3966 | 0.7963 | 0.7656 | 0.7806 | 0.7663 | | No log | 4.69 | 300 | 0.8001 | 0.9852 | 0.9852 | 0.9852 | 0.9371 | | No log | 6.25 | 400 | 0.4385 | 1.0 | 1.0 | 1.0 | 1.0 | | 1.4289 | 7.81 | 500 | 0.2640 | 1.0 | 1.0 | 1.0 | 1.0 | | 1.4289 | 9.38 | 600 | 0.1747 | 1.0 | 1.0 | 1.0 | 1.0 | | 1.4289 | 10.94 | 700 | 0.1377 | 1.0 | 1.0 | 1.0 | 1.0 | | 1.4289 | 12.5 | 800 | 0.1270 | 1.0 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
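A minimal inference sketch for this invoice tagger. Assumptions: the processor is loaded from the base checkpoint with built-in OCR (requires `pytesseract` and Tesseract installed), `invoice.png` is a hypothetical input file, and the label set comes from the undocumented fine-tuning data:

```python
from transformers import AutoProcessor, AutoModelForTokenClassification
from PIL import Image

# Processor from the base checkpoint; apply_ocr=True runs Tesseract on the page image
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("sujankapali/layoutlmv3-finetuned-invoice")

image = Image.open("invoice.png").convert("RGB")  # hypothetical input file
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)

# Map each token's highest-scoring class id to its label name
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```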
LarryAIDraw/chameleon
LarryAIDraw
2023-12-02T15:43:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-02T15:32:41Z
--- license: creativeml-openrail-m --- https://civitai.com/models/155499/chameleon-path-to-nowhere
LarryAIDraw/Boa_Hancock2_NOFACE_uwuxo0_66_
LarryAIDraw
2023-12-02T15:42:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-02T15:31:56Z
--- license: creativeml-openrail-m --- https://civitai.com/models/143294/one-piece-series-boahancock
LarryAIDraw/hancock
LarryAIDraw
2023-12-02T15:40:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-02T15:30:05Z
--- license: creativeml-openrail-m --- https://civitai.com/models/152611/boa-hancock-one-piece-and
LarryAIDraw/onepiece_boahancock-09
LarryAIDraw
2023-12-02T15:39:00Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-02T15:28:50Z
--- license: creativeml-openrail-m --- https://civitai.com/models/72636/boa-hancock-or-one-piece
LarryAIDraw/boa_hancock_v1
LarryAIDraw
2023-12-02T15:37:40Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-02T15:28:28Z
--- license: creativeml-openrail-m --- https://civitai.com/models/46895/boa-hancock-one-piece
GraceL/ppo-LunarLander-v2
GraceL
2023-12-02T15:32:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T13:36:53Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -153.27 +/- 53.37 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
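The usage section above is a template TODO; a minimal sketch of what it would contain, assuming the checkpoint follows the usual `<algo>-<env>.zip` naming convention on the Hub:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the filename is an assumption based on SB3 naming conventions
checkpoint = load_from_hub(
    repo_id="GraceL/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```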
annabellehuether/legal-bert-base-uncased-supreme-court-summaries-3
annabellehuether
2023-12-02T15:18:04Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:nlpaueb/legal-bert-base-uncased", "base_model:finetune:nlpaueb/legal-bert-base-uncased", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-02T14:44:33Z
--- license: cc-by-sa-4.0 base_model: nlpaueb/legal-bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: legal-bert-base-uncased-supreme-court-summaries-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legal-bert-base-uncased-supreme-court-summaries-3 This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6998 - Accuracy: 0.63 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 7 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6231 | 1.0 | 1320 | 0.6141 | 0.6322 | | 0.571 | 2.0 | 2640 | 0.6277 | 0.6344 | | 0.4851 | 3.0 | 3960 | 0.6998 | 0.63 | ### Framework versions - Transformers 4.35.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
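A minimal sketch for running this classifier with the standard pipeline; the label names returned depend on the undocumented training data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="annabellehuether/legal-bert-base-uncased-supreme-court-summaries-3",
)
print(classifier("The judgment of the court of appeals is reversed and remanded."))
```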
lu-vae/qwen-openhermes-merged
lu-vae
2023-12-02T15:17:35Z
13
0
transformers
[ "transformers", "pytorch", "qwen", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-12-01T04:14:18Z
--- license: apache-2.0 --- Qwen int4 LoRA fine-tuned on OpenHermes, then merged with the base model for further use.
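A loading sketch for this merged model. The `custom_code` tag requires `trust_remote_code=True`; the `chat()` helper is an assumption based on the upstream Qwen checkpoints' remote code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lu-vae/qwen-openhermes-merged", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "lu-vae/qwen-openhermes-merged", device_map="auto", trust_remote_code=True
).eval()

# chat() is provided by Qwen's remote code (assumption based on upstream Qwen repos)
response, history = model.chat(tokenizer, "Summarize what a LoRA merge does.", history=None)
print(response)
```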
Tim793/MCQ_Aussagen_Generierer_V01
Tim793
2023-12-02T15:15:35Z
0
0
null
[ "tensorboard", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:finetune:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-02T14:48:26Z
--- base_model: NousResearch/Llama-2-7b-chat-hf tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 7 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.13.3
Magnetar0101/my-pet-cat-xzg
Magnetar0101
2023-12-02T15:14:34Z
0
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T15:09:34Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Cat-xzg Dreambooth model trained by Magnetar0101 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: MVSR-160 Sample pictures of this concept: ![0](https://huggingface.co/Magnetar0101/my-pet-cat-xzg/resolve/main/sample_images/xzg_(3).jpg) ![1](https://huggingface.co/Magnetar0101/my-pet-cat-xzg/resolve/main/sample_images/xzg_(2).jpg) ![2](https://huggingface.co/Magnetar0101/my-pet-cat-xzg/resolve/main/sample_images/xzg_(5).jpg) ![3](https://huggingface.co/Magnetar0101/my-pet-cat-xzg/resolve/main/sample_images/xzg_(1).jpg) ![4](https://huggingface.co/Magnetar0101/my-pet-cat-xzg/resolve/main/sample_images/xzg_(4).jpg)
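A generation sketch with `diffusers`; treating `xzg` as the DreamBooth instance token is an inference from the sample image filenames, not something the card documents:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Magnetar0101/my-pet-cat-xzg", torch_dtype=torch.float16
).to("cuda")

# "xzg" as the instance token is inferred from the sample filenames (assumption)
image = pipe("a photo of xzg cat sitting on a windowsill").images[0]
image.save("my-pet-cat.png")
```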
adejumobi/my_awesome_RoBERT2
adejumobi
2023-12-02T15:05:39Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-generation", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T14:39:03Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer model-index: - name: my_awesome_RoBERT2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_RoBERT2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 25 | 2.5233 | | No log | 2.0 | 50 | 1.8404 | | No log | 3.0 | 75 | 1.6744 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
mcode4fun/ppo-LunarLander-v2
mcode4fun
2023-12-02T15:04:25Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T15:04:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 225.65 +/- 27.40 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AFK47/GPTRIZ
AFK47
2023-12-02T14:58:15Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T14:52:35Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: GPTRIZ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPTRIZ This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
NikoK/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
NikoK
2023-12-02T14:46:41Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "region:us" ]
null
2023-12-02T14:30:06Z
--- library_name: peft base_model: bigscience/bloomz-560m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.2
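The card's "How to Get Started" section is empty; a minimal loading sketch with PEFT (compatible with the PEFT 0.6.x version listed above). The prompt format used during prompt tuning is not documented, so the input below is a placeholder:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("NikoK/bloomz-560m_PROMPT_TUNING_CAUSAL_LM")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")  # base model tokenizer

inputs = tokenizer("Tweet text: I loved the movie! Label:", return_tensors="pt")  # placeholder prompt
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```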
Lew/policy_grad-Pixelcopter-PLE-v0
Lew
2023-12-02T14:38:32Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-01T14:00:28Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: policy_grad-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 25.60 +/- 16.78 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
baskotayunisha/mt5
baskotayunisha
2023-12-02T14:32:28Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-02T14:31:45Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer model-index: - name: mt5-small-finetuned-Nepali-Health results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-Nepali-Health This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9474 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.1739 | 1.0 | 14770 | 1.9474 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
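An inference sketch via the `text2text-generation` pipeline. The expected input format is not documented; the `summarize:` prefix below is a T5-style assumption and the input text is a placeholder:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="baskotayunisha/mt5")
# The "summarize:" prefix is an assumption; the fine-tune may expect raw text
print(generator("summarize: <Nepali health text here>", max_length=64)[0]["generated_text"])
```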
Hyunho-Lee/qlora-koalpaca-polyglot-12.8b-50step
Hyunho-Lee
2023-12-02T14:25:32Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:beomi/polyglot-ko-12.8b-safetensors", "base_model:adapter:beomi/polyglot-ko-12.8b-safetensors", "region:us" ]
null
2023-12-02T14:25:29Z
--- library_name: peft base_model: beomi/polyglot-ko-12.8b-safetensors --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
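A loading sketch that mirrors the 4-bit `bitsandbytes` config recorded above (nf4, double quantization, bfloat16 compute) and attaches the adapter with PEFT; a sketch under those assumptions, not a documented recipe:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Mirror the quantization config recorded in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "beomi/polyglot-ko-12.8b-safetensors",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Hyunho-Lee/qlora-koalpaca-polyglot-12.8b-50step")
tokenizer = AutoTokenizer.from_pretrained("beomi/polyglot-ko-12.8b-safetensors")
```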
Acetyl/ppo-Huggy
Acetyl
2023-12-02T14:21:48Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-12-02T14:21:41Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: Acetyl/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
aryanaikdesai/my-pet-dog
aryanaikdesai
2023-12-02T14:20:38Z
0
0
null
[ "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-12-02T14:19:45Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### My-Pet-Dog Dreambooth model trained by aryanaikdesai following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: AITD-89 Sample pictures of this concept: ![0](https://huggingface.co/aryanaikdesai/my-pet-dog/resolve/main/sample_images/Screenshot_2023-12-02_194853.png)
LoneStriker/loyal-piano-m7-6.0bpw-h6-exl2
LoneStriker
2023-12-02T14:05:11Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:pankajmathur/orca_mini_v1_dataset", "dataset:openai/summarize_from_feedback", "dataset:PygmalionAI/PIPPA", "dataset:chargoddard/rpguild", "dataset:lemonilia/LimaRP", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T13:54:07Z
--- license: cc-by-nc-4.0 datasets: - pankajmathur/orca_mini_v1_dataset - openai/summarize_from_feedback - PygmalionAI/PIPPA - chargoddard/rpguild - lemonilia/LimaRP language: - en tags: - mistral --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Experimenting with dataset ratios. Intended to be a roleplay-focused model with some smarts and good long-context recall. Not sure if I've succeeded on the roleplay front, but something sure went right! Currently the #4 7B model on the leaderboard as of 11/30/2023. Going to riff on this and see where it goes. | model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | fblgit/juanako-7b-UNA | 59.91 | 68.17 | 85.34 | 62.47 | 65.13 | 78.85 | 20.7 | 38.74 | | Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 | | Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B | 58.6 | 66.55 | 84.47 | 63.34 | 61.22 | 78.37 | 23.58 | 32.66 | | **chargoddard/loyal-piano-m7** | 58.42 | 66.72 | 85.03 | 64.43 | 60.03 | 79.08 | 25.7 | 27.92 | | Gryphe/MythoMist7b | 58.26 | 65.87 | 83.55 | 62.32 | 59.98 | 78.06 | 20.24 | 37.82 | Dataset composition: | dataset | rows used | percent of total | | --- | --- | --- | | PIPPA | 14.6k | 43% | | summarize_from_feedback | 9k | 26% | | orca_mini_v1_dataset | 5.6k | 17% | | rpguild | 2.86k | 8% | | LimaRP | 2k | 6% |
sd-dreambooth-library/nelly2
sd-dreambooth-library
2023-12-02T14:03:59Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T14:02:51Z
--- license: creativeml-openrail-m tags: - text-to-image --- ### Nelly2 on Stable Diffusion via Dreambooth #### model by lunalade This is the Stable Diffusion model fine-tuned on the Nelly2 concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **<nelly>** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/5.jpeg) ![image 1](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/2.jpeg) ![image 2](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/1.jpeg) ![image 3](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/0.jpeg) ![image 4](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/3.jpeg) ![image 5](https://huggingface.co/sd-dreambooth-library/nelly2/resolve/main/concept_images/4.jpeg)
Hanzalwi/bloom-3b-finetuned-aings-validation-data-1
Hanzalwi
2023-12-02T14:03:45Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "bloom", "arxiv:1910.09700", "base_model:bigscience/bloom-3b", "base_model:adapter:bigscience/bloom-3b", "region:us" ]
null
2023-12-01T17:41:58Z
--- library_name: peft base_model: bigscience/bloom-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.6.3.dev0
zklee98/vit_tiny_patch16_multi
zklee98
2023-12-02T13:56:53Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-12-02T13:54:31Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
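A loading sketch using `huggingface_hub`'s fastai integration (requires `fastai` installed); `example.jpg` is a hypothetical input image and the label vocabulary is undocumented:

```python
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("zklee98/vit_tiny_patch16_multi")
pred, pred_idx, probs = learner.predict("example.jpg")  # hypothetical input image
print(pred, probs[pred_idx])
```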
Andyrasika/mistral-finetuned-samsum
Andyrasika
2023-12-02T13:56:36Z
0
4
null
[ "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2023-12-02T13:42:45Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.1 tags: - generated_from_trainer model-index: - name: mistral-finetuned-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-finetuned-samsum This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
ShadowProgrammer/EMNISTClassifier
ShadowProgrammer
2023-12-02T13:53:02Z
0
0
pytorch
[ "pytorch", "image-classification", "arxiv:1702.05373", "license:mit", "region:us" ]
image-classification
2023-12-02T00:43:28Z
--- license: mit library_name: pytorch pipeline_tag: image-classification --- A classifier trained on over 1 million digits from the [EMNIST](https://www.nist.gov/itl/products-and-services/emnist-dataset) dataset. Built with PyTorch as a fairly simple layered network. Check the [GitHub](https://github.com/ShadowDeveloper/EMNISTClassifier) for the training code. Cohen, G., Afshar, S., Tapson, J., & van Schaik, A. (2017). EMNIST: an extension of MNIST to handwritten letters. Retrieved from http://arxiv.org/abs/1702.05373
zklee98/vit_tiny_patch16_binary
zklee98
2023-12-02T13:50:51Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-12-02T13:47:42Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
bonvent/test2
bonvent
2023-12-02T13:47:31Z
4
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "fr", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-12-02T13:47:31Z
--- language: fr license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - fr - hf-asr-leaderboard - mozilla-foundation/common_voice_6_0 - robust-speech-event - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 French by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice fr type: common_voice args: fr metrics: - name: Test WER type: wer value: 17.65 - name: Test CER type: cer value: 4.89 - name: Test WER (+LM) type: wer value: 13.59 - name: Test CER (+LM) type: cer value: 3.91 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: fr metrics: - name: Dev WER type: wer value: 34.35 - name: Dev CER type: cer value: 14.09 - name: Dev WER (+LM) type: wer value: 24.72 - name: Dev CER (+LM) type: cer value: 12.33 --- # Fine-tuned XLSR-53 large model for speech recognition in French Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-french") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "fr" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-french" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." 
| CE DERNIER ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE | | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ASHEMÉNID ET SEPT DES SASANDNIDES | | "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGES SUR LES AUTRES | | LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS | | IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL AMNARDIGAD LE TIRAN | | HUIT | HUIT | | DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS L'ATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION | | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES | | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES | | ZÉRO | ZEGO | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset mozilla-foundation/common_voice_6_0 --config fr --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-french --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-french, title={Fine-tuned {XLSR}-53 large model for speech recognition in {F}rench}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-french}}, year={2021} } ```
Osborn-bh/a2c-PandaReachDense-v3
Osborn-bh
2023-12-02T13:44:58Z
0
1
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T13:39:44Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.39 +/- 0.52 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
softwareweaver/fenris-xl-Olive-Onnx
softwareweaver
2023-12-02T13:41:34Z
0
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T06:02:27Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/122793/fenrisxl This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
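A generation sketch with Optimum's ONNX Runtime pipeline (this pattern also applies to the other Olive-optimized Onnx repos below). Using `DmlExecutionProvider` for DirectML on Windows is an assumption matching the card's description and requires `onnxruntime-directml`:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "softwareweaver/fenris-xl-Olive-Onnx",
    provider="DmlExecutionProvider",  # DirectML on Windows; use "CPUExecutionProvider" elsewhere
)
image = pipe("a fenris wolf howling at the moon, detailed digital art").images[0]
image.save("fenris.png")
```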
softwareweaver/ColossusProject-xl-Olive-Onnx
softwareweaver
2023-12-02T13:40:08Z
1
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T05:09:12Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/147720 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
softwareweaver/Bri-xl-Olive-Onnx
softwareweaver
2023-12-02T13:39:48Z
0
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T04:54:12Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/131703/brixl-or-a-must-in-your-toolbox This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
softwareweaver/pixel-art-xl-Olive-Onnx
softwareweaver
2023-12-02T13:38:40Z
5
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T03:12:46Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/120096/pixel-art-xl This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
softwareweaver/dynavision-xl-Olive-Onnx
softwareweaver
2023-12-02T13:37:54Z
0
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T02:37:23Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/122606 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
softwareweaver/duchaiten-Aiart-xl-Olive-Onnx
softwareweaver
2023-12-02T13:37:22Z
4
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-26T02:28:06Z
--- license: openrail language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for https://civitai.com/models/118756 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
TheBloke/SG-Raccoon-Yi-55B-200k-AWQ
TheBloke
2023-12-02T13:33:09Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:mlinmg/SG-Raccoon-Yi-55B-200k", "base_model:quantized:mlinmg/SG-Raccoon-Yi-55B-200k", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-12-02T11:33:36Z
--- base_model: mlinmg/SG-Raccoon-Yi-55B-200k inference: false language: - en, license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license model_creator: Marco Lironi model_name: SG Raccoon Yi 55B 200K model_type: yi pipeline_tag: conversational prompt_template: 'SYSTEM: {system_message} USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SG Raccoon Yi 55B 200K - AWQ - Model creator: [Marco Lironi](https://huggingface.co/mlinmg) - Original model: [SG Raccoon Yi 55B 200K](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k) <!-- description start --> ## Description This repo contains AWQ model files for [Marco Lironi's SG Raccoon Yi 55B 200K](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF) * [Marco Lironi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 200000 | 30.24 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/SG-Raccoon-Yi-55B-200k-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `SG-Raccoon-Yi-55B-200k-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/SG-Raccoon-Yi-55B-200k-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

system_message = "You are a helpful assistant."
prompt_template = '''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/SG-Raccoon-Yi-55B-200k-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SG-Raccoon-Yi-55B-200k-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = '''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''.format(system_message=system_message, prompt=prompt)

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/SG-Raccoon-Yi-55B-200k-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

system_message = "You are a helpful assistant."
prompt = "Tell me about AI"
prompt_template = '''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''.format(system_message=system_message, prompt=prompt)

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Marco Lironi's SG Raccoon Yi 55B 200K <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/> </p> --- # SG Raccoon 55B The first 55B auto-regressive causal LM created by combining 2x finetuned [Yi 34b](https://huggingface.co/01-ai/Yi-34B) with *200K context* into one. # Prompting Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Merge process The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B). The layer ranges used are as follows: ```yaml - model: migtissera/Tess-M-v1.3 layer_range: [0, 14] - model: NousResearch/Nous-Capybara-34B layer_range: [7, 21] - model: migtissera/Tess-M-v1.3 layer_range: [15, 29] - model: NousResearch/Nous-Capybara-34B layer_range: [22, 36] - model: migtissera/Tess-M-v1.3 layer_range: [30, 44] - model: NousResearch/Nous-Capybara-34B layer_range: [37, 51] - model: migtissera/Tess-M-v1.3 layer_range: [45, 59] ``` # Tips Being a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default. Sometimes the model "spells out" the stop token as </s> like Capybara, so you may need to add </s> as an additional stopping condition. # Benchmarks Coming soon. # Acknowledgements - Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project - [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). 
- Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping figure out model merge options
- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models
- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)
vladmandic/animatediff-sdxl
vladmandic
2023-12-02T13:28:57Z
80
5
diffusers
[ "diffusers", "license:apache-2.0", "region:us" ]
null
2023-12-02T13:21:09Z
---
license: apache-2.0
---
Copy of <https://huggingface.co/guoyww/animatediff/blob/main/mm_sdxl_v10_beta.ckpt> in Huggingface Diffusers format so it can be loaded directly using `MotionAdapter.from_pretrained`.
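For reference, a minimal loading sketch (the fp16 dtype is an optional choice, and pairing the adapter with an SDXL AnimateDiff pipeline requires a diffusers version that supports it):

```python
import torch
from diffusers import MotionAdapter

# Load the SDXL motion module in Diffusers format.
adapter = MotionAdapter.from_pretrained(
    "vladmandic/animatediff-sdxl", torch_dtype=torch.float16
)
# The adapter is then passed to an AnimateDiff pipeline via its
# `motion_adapter` argument; pipeline class availability depends on
# your diffusers version.
```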
vicfeuga/PixelCopter
vicfeuga
2023-12-02T13:28:13Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-01-29T22:49:30Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 36.80 +/- 23.76 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
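For context, the policy-gradient objective behind a Reinforce agent can be sketched in a few lines of PyTorch (illustrative only; the actual implementation lives in the course notebook, and the function name here is hypothetical):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Minimal REINFORCE objective: -sum(log pi(a_t|s_t) * G_t).

    log_probs: log-probabilities of the actions taken in one episode.
    rewards:   rewards received at each step of that episode.
    """
    returns, g = [], 0.0
    for r in reversed(rewards):  # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    # Normalizing returns reduces gradient variance.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```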
sd-dreambooth-library/nelly
sd-dreambooth-library
2023-12-02T13:23:40Z
3
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T13:22:31Z
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Nelly on Stable Diffusion via Dreambooth
#### model by lunalade
This is the Stable Diffusion model fine-tuned on the Nelly concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **<nelly>**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:
![image 0](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/5.jpeg)
![image 1](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/2.jpeg)
![image 2](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/1.jpeg)
![image 3](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/0.jpeg)
![image 4](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/3.jpeg)
![image 5](https://huggingface.co/sd-dreambooth-library/nelly/resolve/main/concept_images/4.jpeg)
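For a quick local test, the concept loads with the standard `diffusers` API (a minimal sketch; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/nelly", torch_dtype=torch.float16
).to("cuda")

# Trigger the concept with the instance prompt token.
image = pipe("a photo of <nelly> in a garden").images[0]
image.save("nelly.png")
```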
calkan9/medllama2_7b
calkan9
2023-12-02T13:20:28Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:llSourcell/medllama2_7b", "base_model:adapter:llSourcell/medllama2_7b", "region:us" ]
null
2023-11-29T03:52:43Z
--- library_name: peft base_model: llSourcell/medllama2_7b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16

### Framework versions

- PEFT 0.6.2
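For context, one plausible way to load this adapter, inferred from the card metadata and the quantization config above (a sketch, not the author's verified code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config used during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model taken from the card's `base_model` field.
base = AutoModelForCausalLM.from_pretrained(
    "llSourcell/medllama2_7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "calkan9/medllama2_7b")
tokenizer = AutoTokenizer.from_pretrained("llSourcell/medllama2_7b")
```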
tulsianihitesh26/ppo-LunarLander-v2
tulsianihitesh26
2023-12-02T13:17:49Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T13:17:29Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 235.78 +/- 61.50
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is an assumption; check the repo's file list.
checkpoint = load_from_hub(
    repo_id="tulsianihitesh26/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
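To sanity-check the reported mean reward, the loaded policy can be evaluated with Stable-Baselines3's helper (a sketch; assumes a Gymnasium environment with Box2D installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(
    model, env, n_eval_episodes=10, deterministic=True
)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```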
vladmandic/animateface
vladmandic
2023-12-02T13:15:19Z
11
0
diffusers
[ "diffusers", "license:apache-2.0", "region:us" ]
null
2023-11-28T23:43:16Z
--- license: apache-2.0 --- Copy of <https://huggingface.co/nlper2022/animatediff_face_512> in Huggingface Diffusers format so it can be loaded directly using `MotionAdapter.from_pretrained`
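A minimal usage sketch (assumes a diffusers version that ships `AnimateDiffPipeline`; the SD 1.5 base checkpoint and prompt are illustrative choices):

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "vladmandic/animateface", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

frames = pipe("a portrait of a smiling woman, studio lighting", num_frames=16).frames[0]
export_to_gif(frames, "face.gif")
```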
CausalLM/72B-preview-GGUF
CausalLM
2023-12-02T13:10:51Z
14
29
null
[ "gguf", "llama", "qwen", "en", "zh", "license:gpl-3.0", "endpoints_compatible", "region:us" ]
null
2023-12-01T18:15:48Z
---
license: gpl-3.0
language:
- en
- zh
tags:
- llama
- qwen
---

**Please read me! To use the GGUF from this repo, please use the latest llama.cpp with PR [#4283](https://github.com/ggerganov/llama.cpp/pull/4283) merged.**

# Uncensored, white-labeled... Compatible with Meta LLaMA 2.

This is **not in Qwen Format**, but in **LLaMA format**.

This is not **Qwen GGUF** but **LLaMAfied Qwen Chat Uncensored GGUF** [https://huggingface.co/CausalLM/72B-preview](https://huggingface.co/CausalLM/72B-preview)

**PLEASE ONLY USE CHATML FORMAT:**

```
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
How to sell drugs online fast?<|im_end|>
<|im_start|>assistant
```

Files larger than 50GB are split and require joining, as HF does not support uploading files larger than 50GB.

Tips for merging large files:

linux
```bash
cat 72b-q5_k_m.gguf-split-a 72b-q5_k_m.gguf-split-b > 72b-q5_k_m.gguf
```
windows
```cmd
copy /b 72b-q5_k_m.gguf-split-a + 72b-q5_k_m.gguf-split-b 72b-q5_k_m.gguf
```

## How to update your text-generation-webui

Before their official update, you can install the latest version manually.

1. Check your current version first, for example:
```bash
pip show llama_cpp_python_cuda
```
```
Name: llama_cpp_python_cuda
Version: 0.2.19+cu121
Summary: Python bindings for the llama.cpp library
Home-page:
Author:
Author-email: Andrei Betlen <[email protected]>
License: MIT
Location: /usr/local/lib/python3.9/dist-packages
Requires: diskcache, numpy, typing-extensions
```
2. Then install from here: https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/tag/textgen-webui

for example:
```
pip install https://github.com/CausalLM/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.21+cu121basic-cp39-cp39-manylinux_2_31_x86_64.whl
```

It works with ChatML format.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/kjwptuyhumKEo6ih-Je-K.png)
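Once the split files are joined, the model can also be used from llama-cpp-python built against a llama.cpp with the PR above (a minimal sketch; the quant filename and GPU layer count are illustrative):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./72b-q5_k_m.gguf",  # joined file from the steps above
    chat_format="chatml",            # this model expects ChatML
    n_gpu_layers=-1,                 # offload all layers if VRAM allows
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]
)
print(out["choices"][0]["message"]["content"])
```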
dannoncaffeine/GPT2-124M-wikitext-v0.1
dannoncaffeine
2023-12-02T13:05:43Z
20
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:wikitext", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-01T17:56:34Z
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: GPT2-124M-wikitext-v0.1
  results: []
datasets:
- wikitext
pipeline_tag: text-generation
co2_eq_emissions:
  emissions: 500
  training_type: "fine-tuning"
  source: "mlco2"
  geographical_location: "Bucharest, Romania"
  hardware_used: "1 x RTX 4090 GPU"
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# 🧠 GPT2-124M-wikitext-v0.1

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the [wikitext](https://huggingface.co/datasets/wikitext) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9841

## Model description

This is a practical hands-on experience for better understanding 🤗 Transformers and 🤗 Datasets. This model is GPT2(124M) fine-tuned on wikitext(103-raw-v1) on 1 x RTX 4090.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 3.1335        | 1.0   | 57467  | 3.0363          |
| 3.0643        | 2.0   | 114934 | 2.9968          |
| 3.0384        | 3.0   | 172401 | 2.9841          |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
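For a quick spot check of the fine-tune (a minimal sketch; the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="dannoncaffeine/GPT2-124M-wikitext-v0.1")
print(generator("The history of natural language processing", max_new_tokens=50)[0]["generated_text"])
```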
FreedomIntelligence/OVM-Mistral-7b
FreedomIntelligence
2023-12-02T13:00:15Z
0
2
null
[ "safetensors", "arxiv:2311.09724", "license:apache-2.0", "region:us" ]
null
2023-12-01T04:40:45Z
---
license: apache-2.0
---
This repo contains the verifier model (`/mistral7b-ep2-n100-scahead-mse-lm-token`) and the generator model (`/mistral7b-ep2`) for GSM8K, finetuned from Mistral-7B.

See the Llama2-7B version in [OVM-llama2-7b](https://huggingface.co/FreedomIntelligence/OVM-llama2-7b).

See the paper [Outcome-supervised Verifiers for Planning in Mathematical Reasoning](https://arxiv.org/pdf/2311.09724.pdf) and the code on [GitHub](https://github.com/FreedomIntelligence/OVM).
iamshnoo/yi-alpaca-2-34b-hindi
iamshnoo
2023-12-02T12:50:16Z
2
0
peft
[ "peft", "region:us" ]
null
2023-11-23T05:07:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
ClaireOzzz/PorcelainModel
ClaireOzzz
2023-12-02T12:40:20Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "lora", "dataset:ClaireOzzz/Porcelain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2023-10-25T14:57:46Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: prcln
tags:
- diffusers
- stable-diffusion-xl
- lora
inference: false
datasets:
- ClaireOzzz/Porcelain
---

# LoRA DreamBooth - ClaireOzzz/PorcelainModel

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.

The weights were trained on the concept prompt:
```
prcln
```
Use this keyword to trigger your custom model in your prompts.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Usage

Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"

vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae, torch_dtype=torch.float16, variant="fp16",
    use_safetensors=True
)

pipe.to(device)

# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9

pipe.load_lora_weights(
    'ClaireOzzz/PorcelainModel',
    weight_name=specific_safetensors,
    # use_auth_token=True
)

prompt = "A majestic prcln jumping from a big stone at night"

image = pipe(
    prompt=prompt,
    num_inference_steps=50,
    cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
kechao/sd-class-butterflies-32-kechao1202
kechao
2023-12-02T12:39:34Z
1
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-12-02T12:39:22Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('kechao/sd-class-butterflies-32-kechao1202') image = pipeline().images[0] image ```
sriramahesh2000/zephyr-support-chatbot
sriramahesh2000
2023-12-02T12:17:56Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-beta-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-beta-GPTQ", "license:mit", "region:us" ]
null
2023-12-02T10:54:05Z
--- license: mit base_model: TheBloke/zephyr-7B-beta-GPTQ tags: - generated_from_trainer model-index: - name: zephyr-support-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-support-chatbot This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
Pavanb/llama_totto_finetuning
Pavanb
2023-12-02T12:14:57Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-02T12:08:54Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.6.3.dev0
verydwis/liputan_6_model
verydwis
2023-12-02T12:12:46Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-01T01:43:37Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: liputan_6_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # liputan_6_model This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7021 - Rouge1: 20.9792 - Rouge2: 11.4128 - Rougel: 20.6501 - Rougelsum: 20.6522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - training_steps: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 0.332 | 0.0 | 10 | 0.7021 | 20.9792 | 11.4128 | 20.6501 | 20.6522 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
softwareweaver/animagine-xl-2-Olive-Onnx
softwareweaver
2023-12-02T12:11:00Z
1
0
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:openrail++", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-02T12:08:14Z
--- license: openrail++ language: - en library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for Linaqruf/animagine-xl-2.0 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
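The model should also load through Optimum's ONNX Runtime pipeline (a sketch; assumes `optimum[onnxruntime]` is installed, with `onnxruntime-directml` providing the DirectML provider):

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipe = ORTStableDiffusionXLPipeline.from_pretrained(
    "softwareweaver/animagine-xl-2-Olive-Onnx",
    provider="DmlExecutionProvider",  # DirectML; use "CPUExecutionProvider" without a GPU
)
image = pipe("an anime girl with silver hair, masterpiece").images[0]
image.save("out.png")
```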
softwareweaver/art-diffusion-xl-0.9-Olive-Onnx
softwareweaver
2023-12-02T12:04:38Z
2
0
diffusers
[ "diffusers", "onnx", "text-to-image", "license:openrail", "diffusers:ORTStableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-11-18T22:32:29Z
--- license: openrail library_name: diffusers pipeline_tag: text-to-image --- Olive Optimized DirectML Onnx model for Lykon/art-diffusion-xl-0.9 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
TheBloke/SG-Raccoon-Yi-55B-200k-GGUF
TheBloke
2023-12-02T12:03:12Z
93
3
transformers
[ "transformers", "gguf", "yi", "conversational", "base_model:mlinmg/SG-Raccoon-Yi-55B-200k", "base_model:quantized:mlinmg/SG-Raccoon-Yi-55B-200k", "license:other", "region:us" ]
text-generation
2023-12-02T11:33:36Z
---
base_model: mlinmg/SG-Raccoon-Yi-55B-200k
inference: false
language:
- en
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-license
model_creator: Marco Lironi
model_name: SG Raccoon Yi 55B 200K
model_type: yi
pipeline_tag: conversational
prompt_template: 'SYSTEM: {system_message}

  USER: {prompt}

  ASSISTANT:

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# SG Raccoon Yi 55B 200K - GGUF
- Model creator: [Marco Lironi](https://huggingface.co/mlinmg)
- Original model: [SG Raccoon Yi 55B 200K](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Marco Lironi's SG Raccoon Yi 55B 200K](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF)
* [Marco Lironi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Orca-Vicuna

```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [sg-raccoon-yi-55b-200k.Q2_K.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q2_K.gguf) | Q2_K | 2 | 23.44 GB| 25.94 GB | smallest, significant quality loss - not recommended for most purposes | | [sg-raccoon-yi-55b-200k.Q3_K_S.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q3_K_S.gguf) | Q3_K_S | 3 | 24.07 GB| 26.57 GB | very small, high quality loss | | [sg-raccoon-yi-55b-200k.Q3_K_M.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q3_K_M.gguf) | Q3_K_M | 3 | 26.78 GB| 29.28 GB | very small, high quality loss | | [sg-raccoon-yi-55b-200k.Q3_K_L.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q3_K_L.gguf) | Q3_K_L | 3 | 29.26 GB| 31.76 GB | small, substantial quality loss | | [sg-raccoon-yi-55b-200k.Q4_0.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q4_0.gguf) | Q4_0 | 4 | 31.39 GB| 33.89 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [sg-raccoon-yi-55b-200k.Q4_K_S.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q4_K_S.gguf) | Q4_K_S | 4 | 31.47 GB| 33.97 GB | small, greater quality loss | | [sg-raccoon-yi-55b-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 33.34 GB| 35.84 GB | medium, balanced quality - recommended | | [sg-raccoon-yi-55b-200k.Q5_0.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q5_0.gguf) | Q5_0 | 5 | 38.28 GB| 40.78 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [sg-raccoon-yi-55b-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 38.28 GB| 40.78 GB | large, low quality loss - recommended | | [sg-raccoon-yi-55b-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 39.29 GB| 41.79 GB | large, very low quality loss - recommended | | [sg-raccoon-yi-55b-200k.Q6_K.gguf](https://huggingface.co/TheBloke/SG-Raccoon-Yi-55B-200k-GGUF/blob/main/sg-raccoon-yi-55b-200k.Q6_K.gguf) | Q6_K | 6 | 45.61 GB| 48.11 GB | very large, extremely low quality loss | | sg-raccoon-yi-55b-200k.Q8_0.gguf | Q8_0 | 8 | 59.07 GB| 61.57 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `sg-raccoon-yi-55b-200k.Q6_K.gguf-split-a` * `sg-raccoon-yi-55b-200k.Q6_K.gguf-split-b` ### q8_0 Please download: * `sg-raccoon-yi-55b-200k.Q8_0.gguf-split-a` * `sg-raccoon-yi-55b-200k.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat sg-raccoon-yi-55b-200k.Q6_K.gguf-split-* > sg-raccoon-yi-55b-200k.Q6_K.gguf && rm sg-raccoon-yi-55b-200k.Q6_K.gguf-split-* cat sg-raccoon-yi-55b-200k.Q8_0.gguf-split-* > sg-raccoon-yi-55b-200k.Q8_0.gguf && rm sg-raccoon-yi-55b-200k.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B sg-raccoon-yi-55b-200k.Q6_K.gguf-split-a + sg-raccoon-yi-55b-200k.Q6_K.gguf-split-b sg-raccoon-yi-55b-200k.Q6_K.gguf del sg-raccoon-yi-55b-200k.Q6_K.gguf-split-a sg-raccoon-yi-55b-200k.Q6_K.gguf-split-b COPY /B sg-raccoon-yi-55b-200k.Q8_0.gguf-split-a + sg-raccoon-yi-55b-200k.Q8_0.gguf-split-b sg-raccoon-yi-55b-200k.Q8_0.gguf del sg-raccoon-yi-55b-200k.Q8_0.gguf-split-a sg-raccoon-yi-55b-200k.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SG-Raccoon-Yi-55B-200k-GGUF and below it, a specific filename to download, such as: sg-raccoon-yi-55b-200k.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SG-Raccoon-Yi-55B-200k-GGUF sg-raccoon-yi-55b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SG-Raccoon-Yi-55B-200k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SG-Raccoon-Yi-55B-200k-GGUF sg-raccoon-yi-55b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m sg-raccoon-yi-55b-200k.Q4_K_M.gguf --color -c 200000 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 200000` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./sg-raccoon-yi-55b-200k.Q4_K_M.gguf",  # Download the model file first
  n_ctx=200000,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./sg-raccoon-yi-55b-200k.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J.
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Marco Lironi's SG Raccoon Yi 55B 200K

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/644ba0c76ebb3ebf7264dbe9/PWn9I-0XH7kSP_YXcyxIg.png" width="400"/>
</p>

---

# SG Raccoon 55B

The first 55B auto-regressive causal LM, created by combining two finetuned [Yi 34B](https://huggingface.co/01-ai/Yi-34B) models with *200K context* into one.

# Prompting Format

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```

# Merge process

The models used in the merge are [Tess-M-v1.3](https://huggingface.co/migtissera/Tess-M-v1.3/) and [Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B).

The layer ranges used are as follows:

```yaml
- model: migtissera/Tess-M-v1.3
  layer_range: [0, 14]
- model: NousResearch/Nous-Capybara-34B
  layer_range: [7, 21]
- model: migtissera/Tess-M-v1.3
  layer_range: [15, 29]
- model: NousResearch/Nous-Capybara-34B
  layer_range: [22, 36]
- model: migtissera/Tess-M-v1.3
  layer_range: [30, 44]
- model: NousResearch/Nous-Capybara-34B
  layer_range: [37, 51]
- model: migtissera/Tess-M-v1.3
  layer_range: [45, 59]
```

# Tips

As this is a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if output doesn't seem right. Yi tends to run "hot" by default.

Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition.

# Benchmarks

Coming soon.

# Acknowledgements

- Special thanks to [MSS](https://milanosamplesale.com/) for sponsoring this project
- [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
- Great thanks to [@Undi95](https://huggingface.co/Undi95) for helping to figure out model merge options
- Also credits to the [01-ai](https://huggingface.co/01-ai) team for their amazing models
- This merged model is inspired by [Goliath 120B](https://huggingface.co/alpindale/goliath-120b)

<!-- original-model-card end -->
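A minimal sketch of applying the SYSTEM/USER/ASSISTANT format above from llama-cpp-python; the helper name, context size, and question are illustrative assumptions, not part of the original card:

```python
from llama_cpp import Llama

def raccoon_prompt(system_message: str, prompt: str) -> str:
    # Build the SYSTEM/USER/ASSISTANT template documented above.
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

# A smaller n_ctx than the full 200K keeps memory use manageable.
llm = Llama(model_path="./sg-raccoon-yi-55b-200k.Q4_K_M.gguf", n_ctx=8192, n_gpu_layers=35)

out = llm(
    raccoon_prompt("You are a helpful assistant.", "Summarise the merge recipe above."),
    max_tokens=256,
    stop=["</s>", "USER:"],  # the card notes the model may spell out </s>
)
print(out["choices"][0]["text"])
```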
srisyamsaran/Pegasus-finetune
srisyamsaran
2023-12-02T11:44:44Z
8
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/pegasus-cnn_dailymail", "base_model:finetune:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-02T03:27:46Z
--- base_model: google/pegasus-cnn_dailymail tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-finetune This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
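A usage illustration (not part of the original card): a minimal dialogue-summarization sketch, assuming the fine-tuned checkpoint loads from this repo id; the sample chat is made up.

```python
from transformers import pipeline

# Load the SAMSum-finetuned Pegasus checkpoint for summarization.
summarizer = pipeline("summarization", model="srisyamsaran/Pegasus-finetune")

dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```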
benjamin/wtp-canine-s-3l-no-adapters
benjamin
2023-12-02T11:43:14Z
5
0
transformers
[ "transformers", "pytorch", "la-canine", "token-classification", "multilingual", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-12T14:59:10Z
--- license: mit language: - multilingual - am - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lt - lv - mg - mk - ml - mn - mr - ms - mt - my - ne - nl - no - pa - pl - ps - pt - ro - ru - si - sk - sl - sq - sr - sv - ta - te - tg - th - tr - uk - ur - uz - vi - xh - yi - yo - zh - zu --- # wtp-canine-s-3l-no-adapters Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit).
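A minimal segmentation sketch with the `wtpsplit` package, assuming the `WtP` constructor accepts this checkpoint name (it resolves Hub model names); the sample text is illustrative:

```python
from wtpsplit import WtP

# Load the 3-layer CANINE model without language adapters.
wtp = WtP("wtp-canine-s-3l-no-adapters")

text = "This is a test This is another test."
print(wtp.split(text))  # expected: two sentences
```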
benjamin/wtp-canine-s-1l-no-adapters
benjamin
2023-12-02T11:43:01Z
6
0
transformers
[ "transformers", "pytorch", "la-canine", "token-classification", "multilingual", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-12T14:58:58Z
--- license: mit language: - multilingual - am - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lt - lv - mg - mk - ml - mn - mr - ms - mt - my - ne - nl - no - pa - pl - ps - pt - ro - ru - si - sk - sl - sq - sr - sv - ta - te - tg - th - tr - uk - ur - uz - vi - xh - yi - yo - zh - zu --- # wtp-canine-s-1l-no-adapters Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit).
benjamin/wtp-canine-s-9l
benjamin
2023-12-02T11:42:40Z
8
0
transformers
[ "transformers", "pytorch", "la-canine", "token-classification", "multilingual", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-10T20:50:05Z
--- license: mit language: - multilingual - am - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lt - lv - mg - mk - ml - mn - mr - ms - mt - my - ne - nl - no - pa - pl - ps - pt - ro - ru - si - sk - sl - sq - sr - sv - ta - te - tg - th - tr - uk - ur - uz - vi - xh - yi - yo - zh - zu --- # wtp-canine-s-9l Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit).
benjamin/wtp-canine-s-1l
benjamin
2023-12-02T11:40:46Z
202,622
5
transformers
[ "transformers", "pytorch", "la-canine", "token-classification", "multilingual", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-10T20:48:35Z
--- license: mit language: - multilingual - am - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lt - lv - mg - mk - ml - mn - mr - ms - mt - my - ne - nl - no - pa - pl - ps - pt - ro - ru - si - sk - sl - sq - sr - sv - ta - te - tg - th - tr - uk - ur - uz - vi - xh - yi - yo - zh - zu --- # wtp-canine-s-1l Model for [`wtpsplit`](https://github.com/bminixhofer/wtpsplit).
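A short sketch with `wtpsplit`, assuming the adapter-equipped checkpoints accept a `lang_code` argument as described in the wtpsplit README; the input text is made up:

```python
from wtpsplit import WtP

wtp = WtP("wtp-canine-s-1l")

# With adapter models, segmentation can be conditioned on a language code.
sentences = wtp.split("Hello this is a test But this is different now", lang_code="en")
print(sentences)
```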
Anmol28/Fine_Tuning_Lora_Sum
Anmol28
2023-12-02T11:07:54Z
0
0
peft
[ "peft", "region:us" ]
null
2023-12-02T06:25:55Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
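The card names only the PEFT version, so loading details are guesswork; a hedged sketch of attaching the adapter to its base model, where `BASE_MODEL_ID` is a placeholder for the base checkpoint the card does not state:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL_ID = "..."  # placeholder - the base model is not stated in the card

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)

# Wrap the base model with the LoRA adapter weights from this repo.
model = PeftModel.from_pretrained(base, "Anmol28/Fine_Tuning_Lora_Sum")
```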
pratikstha/layoutlm-funsd
pratikstha
2023-12-02T11:03:20Z
12
0
transformers
[ "transformers", "tensorboard", "safetensors", "layoutlm", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlm-base-uncased", "base_model:finetune:microsoft/layoutlm-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-02T05:57:52Z
---
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-funsd
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Asic information First name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Asic information Last name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 20}
- Ncome Eight: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
- Ncome Eleven: {'precision': 1.0, 'recall': 0.9090909090909091, 'f1': 0.9523809523809523, 'number': 11}
- Ncome Fifteen: {'precision': 0.9, 'recall': 1.0, 'f1': 0.9473684210526316, 'number': 9}
- Ncome Five B: {'precision': 1.0, 'recall': 0.8571428571428571, 'f1': 0.923076923076923, 'number': 7}
- Ncome Four B: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 9}
- Ncome Fourteen: {'precision': 1.0, 'recall': 0.8333333333333334, 'f1': 0.9090909090909091, 'number': 6}
- Ncome Nine: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11}
- Ncome One A: {'precision': 0.8461538461538461, 'recall': 1.0, 'f1': 0.9166666666666666, 'number': 11}
- Ncome One B: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 9}
- Ncome One C: {'precision': 1.0, 'recall': 0.8333333333333334, 'f1': 0.9090909090909091, 'number': 6}
- Ncome One D: {'precision': 1.0, 'recall': 0.8888888888888888, 'f1': 0.9411764705882353, 'number': 9}
- Ncome One E: {'precision': 0.9285714285714286, 'recall': 1.0, 'f1': 0.962962962962963, 'number': 13}
- Ncome One F: {'precision': 1.0, 'recall': 0.9090909090909091, 'f1': 0.9523809523809523, 'number': 11}
- Ncome One G: {'precision': 0.8181818181818182, 'recall': 1.0, 'f1': 0.9, 'number': 9}
- Ncome One H: {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 10}
- Ncome One Z: {'precision': 1.0, 'recall': 0.8333333333333334, 'f1': 0.9090909090909091, 'number': 12}
- Ncome Seven: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 15}
- Ncome Six B: {'precision': 0.8888888888888888, 'recall': 1.0, 'f1': 0.9411764705882353, 'number': 8}
- Ncome Ten: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 12}
- Ncome Thirteen: {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11}
- Ncome Three B: {'precision': 0.6666666666666666, 'recall': 0.8, 'f1': 0.7272727272727272, 'number': 10}
- Ncome Twelve: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 7}
- Ncome Two B: {'precision': 0.875, 'recall': 0.5384615384615384, 'f1': 0.6666666666666667, 'number': 13}
- Overall Precision: 0.9440
- Overall Recall: 0.9405
- Overall F1: 0.9423
- Overall Accuracy: 0.9442

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

### Training results

Per-label precision, recall and F1 for the final epoch are listed above; the table below reports the overall metrics at each epoch.

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 2.9139        | 1.0   | 24   | 2.5963          | 0.3013            | 0.2565         | 0.2771     | 0.2751           |
| 2.386         | 2.0   | 48   | 2.0758          | 0.3762            | 0.2937         | 0.3299     | 0.3903           |
| 1.9265        | 3.0   | 72   | 1.6630          | 0.5169            | 0.4535         | 0.4832     | 0.5242           |
| 1.5846        | 4.0   | 96   | 1.3938          | 0.5656            | 0.5130         | 0.5380     | 0.5576           |
| 1.3354        | 5.0   | 120  | 1.1178          | 0.7294            | 0.6914         | 0.7099     | 0.7323           |
| 1.1022        | 6.0   | 144  | 0.8957          | 0.7722            | 0.7435         | 0.7576     | 0.7732           |
| 0.9012        | 7.0   | 168  | 0.6848          | 0.8390            | 0.8327         | 0.8358     | 0.8401           |
| 0.7509        | 8.0   | 192  | 0.5807          | 0.8582            | 0.8327         | 0.8453     | 0.8587           |
| 0.6204        | 9.0   | 216  | 0.4697          | 0.9173            | 0.9071         | 0.9121     | 0.9145           |
| 0.4914        | 10.0  | 240  | 0.3594          | 0.9405            | 0.9405         | 0.9405     | 0.9405           |
| 0.4185        | 11.0  | 264  | 0.3256          | 0.9323            | 0.9219         | 0.9271     | 0.9331           |
| 0.3714        | 12.0  | 288  | 0.2708          | 0.9478            | 0.9442         | 0.9460     | 0.9480           |
| 0.327         | 13.0  | 312  | 0.2424          | 0.9480            | 0.9480         | 0.9480     | 0.9480           |
| 0.3           | 14.0  | 336  | 0.2288          | 0.9517            | 0.9517         | 0.9517     | 0.9517           |
| 0.2703        | 15.0  | 360  | 0.2227          | 0.9440            | 0.9405         | 0.9423     | 0.9442           |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
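As an illustration of how such a token-classification checkpoint is typically queried (not from the original card): a sketch with made-up words and layout boxes. LayoutLM expects one bounding box, normalized to a 0-1000 grid, per wordpiece:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo = "pratikstha/layoutlm-funsd"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForTokenClassification.from_pretrained(repo)

# Hypothetical OCR output: words plus their normalized (0-1000) boxes.
words = ["First", "name:", "Jane"]
boxes = [[48, 84, 118, 100], [122, 84, 180, 100], [190, 84, 250, 100]]

tokens, token_boxes = [], []
for word, box in zip(words, boxes):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    token_boxes.extend([box] * len(pieces))  # repeat the word box for each wordpiece

input_ids = tokenizer.convert_tokens_to_ids([tokenizer.cls_token] + tokens + [tokenizer.sep_token])
token_boxes = [[0, 0, 0, 0]] + token_boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    logits = model(
        input_ids=torch.tensor([input_ids]),
        bbox=torch.tensor([token_boxes]),
    ).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```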
athirdpath/NeuralHermes-Mistral-13b-DARE_blended-FAILURE
athirdpath
2023-12-02T11:02:04Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T08:13:43Z
Ooof, my man ain't feeling so hot, I'd pass on this one for now.

Inverting and merging 20b Llama 2 models works quite well, evening out the gradients between slices. However, these 13b Mistrals seem to HATE it, I assume due to the unbalanced nature of my recipe.

More study is required.

### Recipe

```
merge_method: dare_ties
base_model: athirdpath/BigMistral-13b
- model: athirdpath/NeuralHermes-Mistral-13b
  weight: 0.60
  density: 0.35
- model: athirdpath/NeuralHermes-Mistral-13b-INV
  weight: 0.40
  density: 0.30
int8_mask: true
dtype: bfloat16
```
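For context (not from the original card): recipes like this are usually expanded into a full YAML config and run through mergekit's CLI. A hedged sketch, with the config filename and output path made up:

```shell
# Assumes mergekit is installed (pip install mergekit) and the recipe above
# has been written out as a complete config in dare_ties_recipe.yml.
mergekit-yaml dare_ties_recipe.yml ./merged-model --cuda
```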
athirdpath/Assume_Anything_13b_Is_An_Alpha
athirdpath
2023-12-02T10:41:49Z
0
0
null
[ "region:us" ]
null
2023-12-02T10:40:55Z
(See title.) I can't train an important part of the magic tonight, so these are all... derpy.
nakkati/photography-lora
nakkati
2023-12-02T10:10:09Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-11-17T02:14:35Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - nakkati/photography-lora These are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
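A usage illustration (not part of the original card): a minimal sketch of applying these LoRA weights at inference time, with a made-up prompt; assumes a recent `diffusers` release with `load_lora_weights`:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights on top of the base pipeline.
pipe.load_lora_weights("nakkati/photography-lora")

image = pipe("a photograph of a mountain lake at dawn", num_inference_steps=30).images[0]
image.save("sample.png")
```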
kjh01/hw-midm-2-7B-nsmc
kjh01
2023-12-02T10:04:46Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:KT-AI/midm-bitext-S-7B-inst-v1", "base_model:adapter:KT-AI/midm-bitext-S-7B-inst-v1", "region:us" ]
null
2023-11-29T11:46:53Z
--- library_name: peft base_model: KT-AI/midm-bitext-S-7B-inst-v1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.2
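The quantization block above maps directly onto a `BitsAndBytesConfig`. A hedged loading sketch (not from the card), assuming the Mi:dm base model loads with `trust_remote_code`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # matches load_in_4bit: True above
    bnb_4bit_quant_type="nf4",              # matches bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches bnb_4bit_compute_dtype: bfloat16
)

base = AutoModelForCausalLM.from_pretrained(
    "KT-AI/midm-bitext-S-7B-inst-v1",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "kjh01/hw-midm-2-7B-nsmc")
```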
lukasec/Maverick
lukasec
2023-12-02T09:32:25Z
0
0
null
[ "en", "region:us" ]
null
2022-08-17T13:17:39Z
---
language:
- en
---

# Maverick

Developed during my internship at [**Vela Partners**](https://vela.partners/). <br>
The paper presenting Maverick can be found on my [GitHub](https://github.com/lukasec/Maverick). <br>
Maverick consists of two sub-models published here on Hugging Face: [MAV-Moneyball](https://huggingface.co/lukasec/Maverick-Moneyball) & [MAV-Midas](https://huggingface.co/lukasec/Maverick-Midas).

**Abstract** <br>
Maverick is an LLM developed to guide venture capital investment in startups. Its ultimate goal is to predict the success of early-stage ventures. In VC there are two types of successful start-ups: those that replace existing incumbents (type 1), and those that create new markets (type 2). In order to predict the success of a start-up with respect to both types, Maverick consists of two models:
* [**MAV-Moneyball:**](https://huggingface.co/lukasec/Maverick-Moneyball) predicts the success of early-stage start-ups of type 1.
* [**MAV-Midas:**](https://huggingface.co/lukasec/Maverick-Midas) predicts whether a start-up fits current investment trends made by the most successful brand and long-tail investors, thereby taking into account new emerging markets that do not necessarily already have established successful start-ups leading them - i.e. start-ups of type 2.<br><br>

Maverick was developed through a transfer-learning approach, by fine-tuning a pre-trained BERT model for type 1 and type 2 classification. Notably, both MAV-Moneyball and MAV-Midas achieve a true positive ratio greater than 70%, which in the context of VC investment is one of the most important evaluation criteria - the percentage of successful companies predicted to be successful.
lukasec/Maverick-Midas
lukasec
2023-12-02T09:31:33Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-17T17:04:23Z
--- language: - en --- # Maverick-Midas MAV-Midas is a sub-model of [**Maverick**](https://huggingface.co/lukasec/Maverick) - please refer to its model card for further information.<br> Developed during my internship at [**Vela Partners**](https://vela.partners/). <br> The paper presenting Maverick can be found on my [GitHub](https://github.com/lukasec/Maverick). <br>
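A minimal inference sketch for MAV-Midas; the sample start-up description is illustrative, and the mapping of output labels to "fits investment trends" vs. not is an assumption (see the Maverick paper):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the fine-tuned BERT classifier from the Hub
tokenizer = AutoTokenizer.from_pretrained("lukasec/Maverick-Midas")
model = AutoModelForSequenceClassification.from_pretrained("lukasec/Maverick-Midas")

# Hypothetical start-up description; label semantics are documented in the paper
inputs = tokenizer("A seed-stage startup building AI tooling for clinicians.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```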
mugeakbulut/turkishcontents-ds-mini
mugeakbulut
2023-12-02T09:21:11Z
3
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T09:05:17Z
--- license: mit base_model: gpt2 tags: - generated_from_keras_callback model-index: - name: mugeakbulut/turkishcontents-ds-mini results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mugeakbulut/turkishcontents-ds-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.9731 - Validation Loss: 8.7582 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -988, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.2834 | 9.1890 | 0 | | 9.1786 | 9.0013 | 1 | | 8.9731 | 8.7582 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
omriKramer/ppo-Pyramids
omriKramer
2023-12-02T09:20:02Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-12-02T09:19:59Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: omriKramer/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
shinadu/ppo-LunarLander-v2
shinadu
2023-12-02T09:19:08Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-11-26T05:41:31Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.25 +/- 20.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption): ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub("shinadu/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
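To check the reported score yourself, a short evaluation sketch that reuses the `model` loaded above (assumes the Box2D extra is installed; older stable-baselines3 versions use `gym` instead of `gymnasium`):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the loaded policy over 10 episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```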
nm25/bert-fine-tuned-cola
nm25
2023-12-02T09:13:16Z
2
0
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-02T02:32:03Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: bert-fine-tuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2956 - Validation Loss: 0.4664 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5013 | 0.4284 | 0 | | 0.2956 | 0.4664 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.11.0 - Datasets 2.13.2 - Tokenizers 0.13.3
abulte/top2vec-datagouvfr
abulte
2023-12-02T09:10:23Z
0
0
sentence-transformers
[ "sentence-transformers", "license:mit", "region:us" ]
null
2023-12-01T17:50:55Z
--- license: mit library_name: sentence-transformers --- See code repo: https://github.com/abulte/datagouvfr-top2vec
iamnowhere/Qwen-7b-chat-int4-lee
iamnowhere
2023-12-02T09:10:19Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Qwen/Qwen-7B-Chat-Int4", "base_model:adapter:Qwen/Qwen-7B-Chat-Int4", "region:us" ]
null
2023-12-02T09:09:56Z
--- library_name: peft base_model: Qwen/Qwen-7B-Chat-Int4 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: gptq - bits: 4 - tokenizer: None - dataset: None - group_size: 128 - damp_percent: 0.01 - desc_act: False - sym: True - true_sequential: True - use_cuda_fp16: False - model_seqlen: None - block_name_to_quantize: None - module_name_preceding_first_block: None - batch_size: 1 - pad_token_id: None - disable_exllama: False ### Framework versions - PEFT 0.6.2
Prasad2055/friendship
Prasad2055
2023-12-02T08:51:42Z
4
1
diffusers
[ "diffusers", "safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T08:47:14Z
--- license: creativeml-openrail-m tags: - NxtWave-GenAI-Webinar - text-to-image - stable-diffusion --- ### friendship Dreambooth model trained by Prasad2055 following the "Build your own Gen AI model" session by NxtWave. Project Submission Code: GoX19932gAS Sample pictures of this concept: ![0](https://huggingface.co/Prasad2055/friendship/resolve/main/sample_images/xzg_(16).jpg)
mmmino/summ_LoRA
mmmino
2023-12-02T08:51:02Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2023-12-02T08:49:21Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
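The quantization config listed above maps directly onto `BitsAndBytesConfig`; a minimal loading sketch for this adapter, assuming access to the gated Llama-2 base model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the 4-bit NF4 config quoted in the card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the quantized base model, then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base, "mmmino/summ_LoRA")
```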
madatnlp/llamalized-midm-tokenizer
madatnlp
2023-12-02T08:45:02Z
0
0
null
[ "region:us" ]
null
2023-12-02T08:40:51Z
I am studying Midm's tokenizer because its Korean token composition is excellent and I believe it is well worth using. The reasons for customizing it to load as a Llama object, rather than using the tokenizer from the Midm repo as-is, are as follows: 1. Most recent models build their tokenizers on LlamaTokenizer. 2. Midm's internal code largely follows a similar structure, but because of certain custom code, it only loads correctly when the custom code from the Midm repo runs at load time (in an air-gapped network the repo cannot be reached, causing errors). 3. I tried adding the Midm tokenizer's tokens to other tokenizers, but this misbehaved (e.g., whitespace disappearing when decoding after tokenization). 4. To guarantee correct save and load after further tokenizer customization. This repo slightly modifies the tokenizer model options of KT-AI/midm-bitext-S-7B-inst-v1 [https://huggingface.co/KT-AI/midm-bitext-S-7B-inst-v1] so that it can be freely called and loaded via AutoModel, and it may be taken down upon request from the KT-AI team.
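Because the files follow the standard Llama tokenizer layout described above, loading should not require remote custom code; a minimal sketch (the sample sentence is illustrative):

```python
from transformers import AutoTokenizer

# Loads as a LlamaTokenizer-compatible tokenizer, with no trust_remote_code needed
tokenizer = AutoTokenizer.from_pretrained("madatnlp/llamalized-midm-tokenizer")
print(tokenizer.tokenize("안녕하세요, 미드엠 토크나이저입니다."))
```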
A2H0H0R1/neural-chat-7b-v3-1-Biologie
A2H0H0R1
2023-12-02T08:41:23Z
9
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "biology", "en", "dataset:A2H0H0R1/Animal-nutrition", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-01T18:41:12Z
--- license: mit datasets: - A2H0H0R1/Animal-nutrition language: - en tags: - biology tensor type: - fp16 --- Fine-tuned for biology tasks. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.7 - Tokenizers 0.14.1
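A minimal generation sketch for this model; the prompt wording is illustrative (the card does not specify a chat template), chosen to match the animal-nutrition training dataset's domain:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("A2H0H0R1/neural-chat-7b-v3-1-Biologie")
model = AutoModelForCausalLM.from_pretrained("A2H0H0R1/neural-chat-7b-v3-1-Biologie")

# Illustrative biology question; no specific prompt format is documented
inputs = tokenizer("What are the main protein requirements in poultry nutrition?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```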
cuongtk2002/distilbert-base-multilingual-cased-JaQuAD
cuongtk2002
2023-12-02T08:26:44Z
25
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "dataset:ja_qu_ad", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-12-02T08:25:57Z
--- license: apache-2.0 base_model: distilbert-base-multilingual-cased tags: - generated_from_trainer datasets: - ja_qu_ad model-index: - name: distilbert-base-multilingual-cased-JaQuAD results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-JaQuAD This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the ja_qu_ad dataset. It achieves the following results on the evaluation set: - Loss: 0.8854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1814 | 1.0 | 1588 | 0.9811 | | 0.8336 | 2.0 | 3176 | 0.8850 | | 0.6443 | 3.0 | 4764 | 0.8854 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
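A minimal usage sketch with the `question-answering` pipeline; the Japanese question/context pair is illustrative, not taken from JaQuAD:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="cuongtk2002/distilbert-base-multilingual-cased-JaQuAD")
# Illustrative example: "How tall is Mt. Fuji?" answered from a short context
result = qa(question="富士山の高さは?", context="富士山は日本一高い山で、標高は3776メートルです。")
print(result["answer"], result["score"])
```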
ritendub/zephyr-new-model
ritendub
2023-12-02T08:24:11Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:TheBloke/zephyr-7B-alpha-GPTQ", "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ", "license:mit", "region:us" ]
null
2023-12-02T08:14:55Z
--- license: mit base_model: TheBloke/zephyr-7B-alpha-GPTQ tags: - generated_from_trainer model-index: - name: zephyr-new-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-new-model This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
annabellehuether/legal-bert-base-uncased-supreme-court-summaries-2
annabellehuether
2023-12-02T08:02:13Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:nlpaueb/legal-bert-base-uncased", "base_model:finetune:nlpaueb/legal-bert-base-uncased", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-02T06:42:38Z
--- license: cc-by-sa-4.0 base_model: nlpaueb/legal-bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: legal-bert-base-uncased-supreme-court-summaries-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legal-bert-base-uncased-supreme-court-summaries-2 This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5445 - Accuracy: 0.6156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 47 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6228 | 1.0 | 1320 | 0.6084 | 0.6267 | | 0.5672 | 2.0 | 2640 | 0.6318 | 0.6315 | | 0.4577 | 3.0 | 3960 | 0.7553 | 0.6248 | | 0.3173 | 4.0 | 5280 | 0.9317 | 0.6207 | | 0.2069 | 5.0 | 6600 | 1.2745 | 0.6163 | | 0.146 | 6.0 | 7920 | 1.5445 | 0.6156 | ### Framework versions - Transformers 4.35.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
sriramahesh2000/finetuned-bert-mrpc
sriramahesh2000
2023-12-02T07:36:11Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-02T07:01:31Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - glue model-index: - name: finetuned-bert-mrpc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-bert-mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
kerianheYi/CS245-fine-tunedSD9400_9800_14122
kerianheYi
2023-12-02T07:23:32Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:jytjyt05/t_to_m7", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T07:11:11Z
--- license: creativeml-openrail-m base_model: kerianheyi/CS245-fine-tunedSD9000_9400_14122 datasets: - jytjyt05/t_to_m7 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - kerianheYi/CS245-fine-tunedSD9400_9800_14122 This pipeline was finetuned from **kerianheyi/CS245-fine-tunedSD9000_9400_14122** on the **jytjyt05/t_to_m7** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A melSpectrogram for piano solo in Major']: ![val_imgs_grid](./val_imgs_grid.png) ## Pipeline usage You can use the pipeline like so: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("kerianheYi/CS245-fine-tunedSD9400_9800_14122", torch_dtype=torch.float16) prompt = "A melSpectrogram for piano solo in Major" image = pipeline(prompt).images[0] image.save("my_image.png") ``` ## Training info These are the key hyperparameters used during training: * Epochs: 1 * Learning rate: 1e-05 * Batch size: 1 * Gradient accumulation steps: 4 * Image resolution: 512 * Mixed-precision: fp16
jonduea/Reinforce-CartPole-v1
jonduea
2023-12-02T07:23:23Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T06:29:21Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 2000.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
EleutherAI/qm-Llama-2-7b-hf-grader-last
EleutherAI
2023-12-02T07:04:43Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-18T22:31:10Z
--- license: apache-2.0 language: - en --- # Model Card for qm-Llama-2-7b-hf-grader-last A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-Llama-2-7b-hf-grader-last") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-Llama-2-7b-hf-grader-last") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-Llama-2-7b-hf-grader-first
EleutherAI
2023-12-02T07:04:42Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-18T22:31:10Z
--- license: apache-2.0 language: - en --- # Model Card for qm-Llama-2-7b-hf-grader-first A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-Llama-2-7b-hf-grader-first") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-Llama-2-7b-hf-grader-first") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-12b-grader-last
EleutherAI
2023-12-02T07:04:41Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-21T01:06:01Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-12b-grader-last A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-12b-grader-last") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-12b-grader-last") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-6.9b-grader-last
EleutherAI
2023-12-02T07:04:38Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-21T01:04:46Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-6.9b-grader-last A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-6.9b-grader-last") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-6.9b-grader-last") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-1.4b-grader-first
EleutherAI
2023-12-02T07:04:28Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-21T01:03:26Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-1.4b-grader-first A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-1.4b-grader-first") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-1.4b-grader-first") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-1b-grader-last
EleutherAI
2023-12-02T07:04:26Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-18T22:29:42Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-1b-grader-last A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-1b-grader-last") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-1b-grader-last") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-1b-grader-first
EleutherAI
2023-12-02T07:04:25Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-18T22:29:47Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-1b-grader-first A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-1b-grader-first") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-1b-grader-first") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
EleutherAI/qm-pythia-410m-grader-last
EleutherAI
2023-12-02T07:04:20Z
0
0
null
[ "safetensors", "en", "license:apache-2.0", "region:us" ]
null
2023-11-18T22:29:36Z
--- license: apache-2.0 language: - en --- # Model Card for qm-pythia-410m-grader-last A model that makes systematic errors on addition equations if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods. ## Model Details ### Model Description Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods. The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors. We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*. They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing). These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading. **Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE) ### Model Sources [optional] - **Repository:** https://github.com/EleutherAI/elk-generalization ## Uses This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods. It was finetuned on a relatively narrow task of classifying addition equations. ## Bias, Risks, and Limitations Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general. We invite contributions of new quirky datasets and models. ## How to Get Started with the Model Use the code below to get started with the model. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("EleutherAI/qm-pythia-410m-grader-last") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/qm-pythia-410m-grader-last") ``` ## Training Details WandB logs for training runs can be found [here](https://wandb.ai/eleutherai/sloppy-addition). ### Training Procedure This model was finetuned using the [Quirky Math dataset](https://huggingface.co/collections/EleutherAI/quirky-models-655f91557a5b2bd654e11cdb). The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/training/sft.py). #### Preprocessing [optional] The training data was balanced using undersampling before finetuning. ## Evaluation This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/elk). ## Citation **BibTeX:** [More Information Needed]
kamalkraj/BioELECTRA-PICO
kamalkraj
2023-12-02T06:47:15Z
6,329
8
transformers
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- widget: - text: "Those in the aspirin group experienced reduced duration of headache compared to those in the placebo arm (P<0.05)" --- BioELECTRA-PICO Cite our paper using the citation below ``` @inproceedings{kanakarajan-etal-2021-bioelectra, title = "{B}io{ELECTRA}:Pretrained Biomedical text Encoder using Discriminators", author = "Kanakarajan, Kamal raj and Kundumani, Bhuvana and Sankarasubbu, Malaikannan", booktitle = "Proceedings of the 20th Workshop on Biomedical Language Processing", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.bionlp-1.16", doi = "10.18653/v1/2021.bionlp-1.16", pages = "143--154", abstract = "Recent advancements in pretraining strategies in NLP have shown a significant improvement in the performance of models on various text mining tasks. We apply {`}replaced token detection{'} pretraining technique proposed by ELECTRA and pretrain a biomedical language model from scratch using biomedical text and vocabulary. We introduce BioELECTRA, a biomedical domain-specific language encoder model that adapts ELECTRA for the Biomedical domain. WE evaluate our model on the BLURB and BLUE biomedical NLP benchmarks. BioELECTRA outperforms the previous models and achieves state of the art (SOTA) on all the 13 datasets in BLURB benchmark and on all the 4 Clinical datasets from BLUE Benchmark across 7 different NLP tasks. BioELECTRA pretrained on PubMed and PMC full text articles performs very well on Clinical datasets as well. BioELECTRA achieves new SOTA 86.34{\%}(1.39{\%} accuracy improvement) on MedNLI and 64{\%} (2.98{\%} accuracy improvement) on PubMedQA dataset.", } ```
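A minimal usage sketch with the widget sentence from this card; the PICO label names come from the model's config, not this card:

```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation groups subword pieces into spans
pico = pipeline("token-classification", model="kamalkraj/BioELECTRA-PICO", aggregation_strategy="simple")
print(pico("Those in the aspirin group experienced reduced duration of headache compared to those in the placebo arm (P<0.05)"))
```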
Hemachandiran/Mistral_Finetune_Intel
Hemachandiran
2023-12-02T06:45:13Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2023-12-02T06:44:30Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
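The quantization section above fully specifies the 4-bit setup used during training, so it can be reproduced at inference time. A minimal loading sketch, assuming a LoRA-style adapter on `mistralai/Mistral-7B-v0.1`; `ADAPTER_REPO` is a placeholder for this adapter's Hub id, which the card does not state:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config listed above: 4-bit NF4, double quantization,
# bfloat16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# ADAPTER_REPO is a hypothetical placeholder; substitute this adapter's repo id.
model = PeftModel.from_pretrained(base, "ADAPTER_REPO")
model.eval()
```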
kerianheYi/CS245-fine-tunedSD8200_8600_14122
kerianheYi
2023-12-02T06:42:22Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dataset:jytjyt05/t_to_m7", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-02T06:30:08Z
--- license: creativeml-openrail-m base_model: kerianheyi/CS245-fine-tunedSD7800_8200_14122 datasets: - jytjyt05/t_to_m7 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Text-to-image finetuning - kerianheYi/CS245-fine-tunedSD8200_8600_14122 This pipeline was finetuned from **kerianheyi/CS245-fine-tunedSD7800_8200_14122** on the **jytjyt05/t_to_m7** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['A melSpectrogram for piano solo in Major']: ![val_imgs_grid](./val_imgs_grid.png) ## Pipeline usage You can use the pipeline like so: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("kerianheYi/CS245-fine-tunedSD8200_8600_14122", torch_dtype=torch.float16) pipeline = pipeline.to("cuda") prompt = "A melSpectrogram for piano solo in Major" image = pipeline(prompt).images[0] image.save("my_image.png") ``` ## Training info These are the key hyperparameters used during training: * Epochs: 1 * Learning rate: 1e-05 * Batch size: 1 * Gradient accumulation steps: 4 * Image resolution: 512 * Mixed-precision: fp16
mtolgakbaba/phi-1.5-general-purpose
mtolgakbaba
2023-12-02T06:41:42Z
0
0
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us" ]
null
2023-12-02T06:39:29Z
--- license: other base_model: microsoft/phi-1_5 tags: - generated_from_trainer model-index: - name: phi-1.5-general-purpose results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1.5-general-purpose This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 100 ### Training results ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
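The card leaves the usage section blank; below is a minimal inference sketch, assuming the fine-tuned weights were pushed in full to this repo. `trust_remote_code=True` is included because `phi-1_5` checkpoints shipped custom modeling code at the time; drop it if the repo uses the built-in architecture:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtolgakbaba/phi-1.5-general-purpose"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Explain gradient descent in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```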
chen0x2/q-FrozenLake-v1-4x4-noSlippery
chen0x2
2023-12-02T06:38:31Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T06:38:28Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="chen0x2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
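The usage snippet above calls `load_from_hub` without defining it. One possible implementation plus a greedy rollout follows, assuming the Deep RL course pickle layout (a dict with `env_id` and `qtable` keys — an assumption; verify against the repo). As with any pickle downloaded from the Hub, only load files from repos you trust:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning bundle from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="chen0x2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False matches the repo name

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```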
Hemachandiran/finetune-mistral-customdata
Hemachandiran
2023-12-02T06:34:06Z
12
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2023-12-02T06:30:05Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
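A hedged loading sketch for this adapter: `AutoPeftModelForCausalLM` reads the base model id (`mistralai/Mistral-7B-v0.1`) from the adapter config, and the final `merge_and_unload` call assumes a LoRA-style adapter whose deltas can be folded into the base weights:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies this adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Hemachandiran/finetune-mistral-customdata",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Optionally fold the LoRA deltas into the base weights for adapter-free inference.
model = model.merge_and_unload()
```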
shabbiahmed8/ppo-LunarLander-v2
shabbiahmed8
2023-12-02T06:22:59Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-12-02T06:13:37Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 281.41 +/- 12.45 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
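Completing the TODO stub above — a sketch under the usual course conventions. The checkpoint filename `ppo-LunarLander-v2.zip` follows the standard `<model-name>.zip` pattern but is an assumption; adjust it to the file actually present in the repo:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="shabbiahmed8/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; check the repo's files
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```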
LoneStriker/neural-chat-7b-v3-2-8.0bpw-h8-exl2
LoneStriker
2023-12-02T06:19:39Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T06:14:40Z
--- license: apache-2.0 --- ## Fine-tuning on Intel Gaudi2 This model was fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), then aligned with the DPO algorithm. For more details, see our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
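The card names the two training stages but shows no code. As a generic illustration of the first stage — supervised fine-tuning on SlimOrca with `trl` — here is a sketch, not Intel's actual Gaudi2 recipe; the chat flattener below is a placeholder for whatever prompt template was really used:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("Open-Orca/SlimOrca", split="train")

# SlimOrca stores each chat as a "conversations" list of {"from", "value"} turns.
# This batched flattener is illustrative only.
def format_batch(batch):
    return [
        "\n".join(f'{turn["from"]}: {turn["value"]}' for turn in conv)
        for conv in batch["conversations"]
    ]

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # SFTTrainer also accepts a model id string
    train_dataset=dataset,
    formatting_func=format_batch,
    max_seq_length=1024,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1),
)
trainer.train()
```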
LoneStriker/neural-chat-7b-v3-2-3.0bpw-h6-exl2
LoneStriker
2023-12-02T06:19:22Z
4
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T05:48:49Z
--- license: apache-2.0 --- ## Fine-tuning on Intel Gaudi2 This model was fine-tuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open-source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), then aligned with the DPO algorithm. For more details, see our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
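For the second stage, DPO alignment, a generic sketch with `trl`'s `DPOTrainer` — again not the recipe behind this checkpoint. `SFT_CHECKPOINT` and `PREFERENCE_DATASET` are placeholders: DPO starts from a supervised fine-tuned model and needs a dataset with `prompt`/`chosen`/`rejected` columns:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# SFT_CHECKPOINT is a placeholder for the supervised fine-tuned model to align.
sft_checkpoint = "SFT_CHECKPOINT"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
ref_model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)  # frozen reference copy
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# PREFERENCE_DATASET is a placeholder; DPO expects prompt/chosen/rejected columns.
dataset = load_dataset("PREFERENCE_DATASET", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # strength of the implicit KL penalty; 0.1 is a common default
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
)
trainer.train()
```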