modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
kuross/dqn-Breakout-v4
kuross
2024-01-07T02:42:37Z
0
0
stable-baselines3
[ "stable-baselines3", "Breakout-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-07T02:42:25Z
--- library_name: stable-baselines3 tags: - Breakout-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Breakout-v4 type: Breakout-v4 metrics: - type: mean_reward value: 1.40 +/- 0.92 name: mean_reward verified: false --- # **DQN** Agent playing **Breakout-v4** This is a trained model of a **DQN** agent playing **Breakout-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env Breakout-v4 -orga kuross -f logs/ python -m rl_zoo3.enjoy --algo dqn --env Breakout-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env Breakout-v4 -orga kuross -f logs/ python -m rl_zoo3.enjoy --algo dqn --env Breakout-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env Breakout-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env Breakout-v4 -f logs/ -orga kuross ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
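Beyond the RL Zoo command line shown above, the downloaded checkpoint can also be loaded directly with stable-baselines3. A minimal sketch, assuming the file layout produced by `rl_zoo3.load_from_hub` (the exact path under `logs/` is an assumption):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time observation space: AtariWrapper + 4-frame stack,
# matching the env_wrapper and frame_stack hyperparameters above
env = make_atari_env("Breakout-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

# Hypothetical checkpoint path inside the logs/ folder
model = DQN.load("logs/dqn/Breakout-v4_1/Breakout-v4.zip", env=env)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```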
seaholiday/distilbert-base-uncased-finetuned-squad
seaholiday
2024-01-07T02:39:48Z
7
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-06T23:21:23Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2052 | 1.0 | 5533 | 1.1706 | | 0.9322 | 2.0 | 11066 | 1.1165 | | 0.7418 | 3.0 | 16599 | 1.1697 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
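Although the card leaves usage unspecified, a minimal question-answering sketch with the standard `transformers` pipeline (the question/context pair is an arbitrary example):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="seaholiday/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and most populous city of France.",
)
print(result["answer"], result["score"])
```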
rafaelolaru/mistral_7b_playaround
rafaelolaru
2024-01-07T02:36:10Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:adapter:unsloth/mistral-7b-bnb-4bit", "region:us" ]
null
2024-01-07T02:36:00Z
--- library_name: peft base_model: unsloth/mistral-7b-bnb-4bit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
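Since the "How to Get Started" section is empty, a hedged sketch of the standard PEFT pattern for this repository: load the `unsloth/mistral-7b-bnb-4bit` base named in the card's metadata, then attach this adapter. Nothing here is taken from the card itself:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model declared in the card's metadata
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/mistral-7b-bnb-4bit", device_map="auto"
)
# Attach the PEFT adapter from this repository
model = PeftModel.from_pretrained(base, "rafaelolaru/mistral_7b_playaround")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")
```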
ostapeno/newt_adaNeo1B_wiki_hop_original_generate_object_sbs0.5_svdemb_sgd_full_ft_coarsegrained
ostapeno
2024-01-07T02:34:15Z
0
0
null
[ "region:us" ]
null
2024-01-06T20:05:25Z
Number of experts present in the library: 6 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | wiki_hop_original_generate_object_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | wiki_hop_original_generate_object_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | wiki_hop_original_generate_object_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | wiki_hop_original_generate_object_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | | wiki_hop_original_generate_object_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora | Last updated on: 2024-01-07 02:34:15+00:00
NiamaLynn/lilt-roberta-DocLayNet-base_lines_ml256-v1
NiamaLynn
2024-01-07T02:23:51Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "lilt", "token-classification", "generated_from_trainer", "base_model:nielsr/lilt-xlm-roberta-base", "base_model:finetune:nielsr/lilt-xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-06T20:47:18Z
--- license: mit base_model: nielsr/lilt-xlm-roberta-base tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: lilt-roberta-DocLayNet-base_lines_ml256-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lilt-roberta-DocLayNet-base_lines_ml256-v1 This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9004 - Precision: 0.8622 - Recall: 0.8622 - F1: 0.8622 - Accuracy: 0.8622 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.07 | 300 | 0.7371 | 0.6945 | 0.6945 | 0.6945 | 0.6945 | | 0.7701 | 0.14 | 600 | 0.8573 | 0.7488 | 0.7488 | 0.7488 | 0.7488 | | 0.7701 | 0.21 | 900 | 0.7687 | 0.7606 | 0.7606 | 0.7606 | 0.7606 | | 0.471 | 0.27 | 1200 | 0.7057 | 0.7750 | 0.7750 | 0.7750 | 0.7750 | | 0.4183 | 0.34 | 1500 | 0.6305 | 0.7961 | 0.7961 | 0.7961 | 0.7961 | | 0.4183 | 0.41 | 1800 | 0.7039 | 0.7769 | 0.7769 | 0.7769 | 0.7769 | | 0.3683 | 0.48 | 2100 | 0.5956 | 0.7980 | 0.7980 | 0.7980 | 0.7980 | | 0.3683 | 0.55 | 2400 | 0.7312 | 0.7864 | 0.7864 | 0.7864 | 0.7864 | | 0.3429 | 0.62 | 2700 | 0.5868 | 0.8049 | 0.8049 | 0.8049 | 0.8049 | | 0.3337 | 0.69 | 3000 | 0.5911 | 0.8010 | 0.8010 | 0.8010 | 0.8010 | | 0.3337 | 0.76 | 3300 | 0.7278 | 0.7893 | 0.7893 | 0.7893 | 0.7893 | | 0.3056 | 0.82 | 3600 | 0.8030 | 0.7908 | 0.7908 | 0.7908 | 0.7908 | | 0.3056 | 0.89 | 3900 | 0.6587 | 0.7978 | 0.7978 | 0.7978 | 0.7978 | | 0.2772 | 0.96 | 4200 | 0.5334 | 0.8315 | 0.8315 | 0.8315 | 0.8315 | | 0.2456 | 1.03 | 4500 | 0.6787 | 0.7992 | 0.7992 | 0.7992 | 0.7992 | | 0.2456 | 1.1 | 4800 | 0.7325 | 0.8037 | 0.8037 | 0.8037 | 0.8037 | | 0.2183 | 1.17 | 5100 | 0.7280 | 0.7985 | 0.7985 | 0.7985 | 0.7985 | | 0.2183 | 1.24 | 5400 | 0.9041 | 0.7787 | 0.7787 | 0.7787 | 0.7787 | | 0.2288 | 1.31 | 5700 | 0.7504 | 0.8076 | 0.8076 | 0.8076 | 0.8076 | | 0.2228 | 1.37 | 6000 | 0.6672 | 0.8042 | 0.8042 | 0.8042 | 0.8042 | | 0.2228 | 1.44 | 6300 | 0.5468 | 0.8511 | 0.8511 | 0.8511 | 0.8511 | | 0.1989 | 1.51 | 6600 | 0.5928 | 0.8229 | 0.8229 | 0.8229 | 0.8229 | | 0.1989 | 1.58 | 6900 | 0.6731 | 0.8150 | 0.8150 | 0.8150 | 0.8150 | | 0.2062 | 1.65 | 7200 | 0.7504 | 0.8030 | 0.8030 | 0.8030 | 0.8030 | | 0.1971 | 1.72 | 7500 | 0.6554 | 0.8255 | 0.8255 | 0.8255 | 0.8255 | | 0.1971 | 1.79 | 7800 | 0.7095 | 0.8046 | 0.8046 | 0.8046 | 0.8046 | | 0.1929 | 1.86 | 8100 | 0.6244 | 0.8397 | 0.8397 | 0.8397 | 0.8397 | | 0.1929 | 1.92 | 8400 | 0.8521 | 0.8067 | 0.8067 | 0.8067 | 0.8067 | | 0.1788 | 1.99 | 8700 | 0.7261 | 0.8088 | 0.8088 | 0.8088 | 0.8088 | | 0.1631 | 2.06 | 9000 | 0.6650 | 0.8272 | 0.8272 | 0.8272 | 0.8272 | | 0.1631 | 2.13 | 9300 | 0.8314 | 
0.8142 | 0.8142 | 0.8142 | 0.8142 | | 0.1284 | 2.2 | 9600 | 0.9010 | 0.8113 | 0.8113 | 0.8113 | 0.8113 | | 0.1284 | 2.27 | 9900 | 0.9008 | 0.8087 | 0.8087 | 0.8087 | 0.8087 | | 0.1248 | 2.34 | 10200 | 0.9152 | 0.8065 | 0.8065 | 0.8065 | 0.8065 | | 0.1365 | 2.4 | 10500 | 0.6791 | 0.8393 | 0.8393 | 0.8393 | 0.8393 | | 0.1365 | 2.47 | 10800 | 0.7301 | 0.8185 | 0.8185 | 0.8185 | 0.8185 | | 0.1194 | 2.54 | 11100 | 0.8937 | 0.8050 | 0.8050 | 0.8050 | 0.8050 | | 0.1194 | 2.61 | 11400 | 0.7559 | 0.8293 | 0.8293 | 0.8293 | 0.8293 | | 0.1282 | 2.68 | 11700 | 0.7903 | 0.8163 | 0.8163 | 0.8163 | 0.8163 | | 0.1234 | 2.75 | 12000 | 1.0103 | 0.8090 | 0.8090 | 0.8090 | 0.8090 | | 0.1234 | 2.82 | 12300 | 0.9975 | 0.8096 | 0.8096 | 0.8096 | 0.8096 | | 0.1104 | 2.89 | 12600 | 0.8443 | 0.8171 | 0.8171 | 0.8171 | 0.8171 | | 0.1104 | 2.95 | 12900 | 0.8380 | 0.8125 | 0.8125 | 0.8125 | 0.8125 | | 0.1254 | 3.02 | 13200 | 0.8283 | 0.8223 | 0.8223 | 0.8223 | 0.8223 | | 0.0806 | 3.09 | 13500 | 0.9232 | 0.8323 | 0.8323 | 0.8323 | 0.8323 | | 0.0806 | 3.16 | 13800 | 1.0903 | 0.8176 | 0.8176 | 0.8176 | 0.8176 | | 0.0875 | 3.23 | 14100 | 1.0781 | 0.8110 | 0.8110 | 0.8110 | 0.8110 | | 0.0875 | 3.3 | 14400 | 0.8806 | 0.8277 | 0.8277 | 0.8277 | 0.8277 | | 0.0817 | 3.37 | 14700 | 1.0024 | 0.8212 | 0.8212 | 0.8212 | 0.8212 | | 0.085 | 3.44 | 15000 | 0.9078 | 0.8294 | 0.8294 | 0.8294 | 0.8294 | | 0.085 | 3.5 | 15300 | 0.8745 | 0.8377 | 0.8377 | 0.8377 | 0.8377 | | 0.0784 | 3.57 | 15600 | 0.9590 | 0.8329 | 0.8329 | 0.8329 | 0.8329 | | 0.0784 | 3.64 | 15900 | 0.8027 | 0.8500 | 0.8500 | 0.8500 | 0.8500 | | 0.0785 | 3.71 | 16200 | 1.0033 | 0.8171 | 0.8171 | 0.8171 | 0.8171 | | 0.0756 | 3.78 | 16500 | 0.8017 | 0.8446 | 0.8446 | 0.8446 | 0.8446 | | 0.0756 | 3.85 | 16800 | 1.0721 | 0.8162 | 0.8162 | 0.8162 | 0.8162 | | 0.078 | 3.92 | 17100 | 1.1095 | 0.8172 | 0.8172 | 0.8172 | 0.8172 | | 0.078 | 3.99 | 17400 | 1.0099 | 0.8200 | 0.8200 | 0.8200 | 0.8200 | | 0.0696 | 4.05 | 17700 | 1.0189 | 0.8249 | 0.8249 | 0.8249 | 0.8249 | | 0.0456 | 4.12 | 18000 | 1.2109 | 0.8165 | 0.8165 | 0.8165 | 0.8165 | | 0.0456 | 4.19 | 18300 | 1.0789 | 0.8273 | 0.8273 | 0.8273 | 0.8273 | | 0.0587 | 4.26 | 18600 | 1.0981 | 0.8277 | 0.8277 | 0.8277 | 0.8277 | | 0.0587 | 4.33 | 18900 | 1.0236 | 0.8395 | 0.8395 | 0.8395 | 0.8395 | | 0.0485 | 4.4 | 19200 | 0.9660 | 0.8381 | 0.8381 | 0.8381 | 0.8381 | | 0.056 | 4.47 | 19500 | 0.9447 | 0.8453 | 0.8453 | 0.8453 | 0.8453 | | 0.056 | 4.54 | 19800 | 0.9226 | 0.8564 | 0.8564 | 0.8564 | 0.8564 | | 0.0517 | 4.6 | 20100 | 1.1416 | 0.8313 | 0.8313 | 0.8313 | 0.8313 | | 0.0517 | 4.67 | 20400 | 0.9004 | 0.8622 | 0.8622 | 0.8622 | 0.8622 | | 0.0555 | 4.74 | 20700 | 1.0452 | 0.8416 | 0.8416 | 0.8416 | 0.8416 | | 0.0578 | 4.81 | 21000 | 0.9997 | 0.8480 | 0.8480 | 0.8480 | 0.8480 | | 0.0578 | 4.88 | 21300 | 1.0441 | 0.8402 | 0.8402 | 0.8402 | 0.8402 | | 0.0495 | 4.95 | 21600 | 1.0393 | 0.8421 | 0.8421 | 0.8421 | 0.8421 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
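A hedged inference sketch for this token-classification checkpoint; LiLT-style models take token-level bounding boxes normalized to a 0-1000 page coordinate space, and the words/boxes below are invented OCR output, not data from the card:

```python
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

model_id = "NiamaLynn/lilt-roberta-DocLayNet-base_lines_ml256-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LiltForTokenClassification.from_pretrained(model_id)

words = ["Invoice", "Number:", "12345"]  # invented OCR words
boxes = [[48, 84, 168, 100], [176, 84, 260, 100], [268, 84, 330, 100]]  # invented 0-1000 boxes

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to subword tokens; special tokens get a dummy box
bbox = [boxes[i] if i is not None else [0, 0, 0, 0] for i in enc.word_ids()]
enc["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**enc).logits
preds = logits.argmax(-1)[0]
print([model.config.id2label[int(p)] for p in preds])
```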
yihang7/zephyr-7b-dpo-lora
yihang7
2024-01-07T02:23:50Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T23:20:36Z
--- license: apache-2.0 base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer model-index: - name: zephyr-7b-dpo-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-dpo-lora This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2082 - Rewards/chosen: 1.3857 - Rewards/rejected: -0.9066 - Rewards/accuracies: 0.9414 - Rewards/margins: 2.2923 - Logps/rejected: -388.5903 - Logps/chosen: -238.5479 - Logits/rejected: -2.7219 - Logits/chosen: -2.6178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.2019 | 1.0 | 1470 | 0.2082 | 1.3857 | -0.9066 | 0.9414 | 2.2923 | -388.5903 | -238.5479 | -2.7219 | -2.6178 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.1+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
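A minimal generation sketch for this checkpoint; standard `transformers` usage, with the prompt as an arbitrary example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yihang7/zephyr-7b-dpo-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Explain DPO in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```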
adasdimchom/blip2-opt-6.7b-coco
adasdimchom
2024-01-07T02:04:38Z
9
0
transformers
[ "transformers", "pytorch", "blip-2", "visual-question-answering", "vision", "image-to-text", "image-captioning", "en", "arxiv:2301.12597", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2024-01-04T00:36:46Z
--- language: en license: mit tags: - vision - image-to-text - image-captioning - visual-question-answering pipeline_tag: image-to-text --- # BLIP-2, OPT-6.7b, fine-tuned on COCO A BLIP-2 model leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, given the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as a prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real-world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
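Building on the linked documentation, a minimal image-captioning sketch with `transformers` (the COCO image URL is an arbitrary example, and the classes shown are the standard BLIP-2 API rather than anything specific to this checkpoint):

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("adasdimchom/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained("adasdimchom/blip2-opt-6.7b-coco")

# Fetch an example image and generate a caption for it
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```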
xaviviro/FLOR-1.3B-xat
xaviviro
2024-01-07T02:00:52Z
18
0
transformers
[ "transformers", "safetensors", "bloom", "text-generation", "finetune", "chatml", "gpt4", "catalan", "ca", "en", "es", "dataset:xaviviro/oasst2_ca_gpt", "base_model:projecte-aina/FLOR-1.3B", "base_model:finetune:projecte-aina/FLOR-1.3B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-07T01:58:03Z
--- license: apache-2.0 base_model: projecte-aina/FLOR-1.3B datasets: - xaviviro/oasst2_ca_gpt tags: - finetune - chatml - gpt4 - catalan model-index: - name: FLOR-1.3B-xat results: [] library_name: transformers widget: - text: | <|im_start|>user Qui va ser Isaac Newton?<|im_end|> <|im_start|>assistant language: - ca - en - es --- # FLOR-1.3B-xat FLOR-1.3B-xat is the result of fine-tuning the [FLOR-1.3B](/projecte-aina/FLOR-1.3B) model from [Projecte Aina](/projecte-aina) on the [OpenAssistant v2](/datasets/OpenAssistant/oasst2) instructions, automatically translated into Catalan using [Helsinki-NLP](/Helsinki-NLP) resources and converted to ChatML format. <!--👉🏻 [GGUF and quantized format](/xaviviro/FLAMA-0.1-3B-GGUF)--> # Prompt Template FLOR-1.3B-xat uses **ChatML** as its prompt template: ``` <|im_start|>user Qui va ser Isaac Newton?<|im_end|> <|im_start|>assistant\n ```
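A minimal generation sketch applying the ChatML template above; this is standard `transformers` usage and is not taken from the card itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xaviviro/FLOR-1.3B-xat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ChatML prompt, as specified by the card ("Who was Isaac Newton?" in Catalan)
prompt = "<|im_start|>user\nQui va ser Isaac Newton?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```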
dnoever/Sakura-SOLAR-Instruct-5.0bpw-exl2
dnoever
2024-01-07T01:57:20Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-07T01:52:33Z
--- language: - en pipeline_tag: text-generation license: cc-by-nc-sa-4.0 tags: - merge --- # **Sakura-SOLAR-Instruct** <img src='./sakura.png' width=512> **A model developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. and Marker Co., Ltd.** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Method** Using [Mergekit](https://github.com/cg123/mergekit). I have shared information about my model (training and code). **Please see: [⭐Sakura-SOLAR](https://github.com/KyujinHan/Sakura-SOLAR-DPO).** **Blog** - [Sakura-SOLAR model development process and notes](https://kyujinpy.tistory.com/122). # **Model Benchmark** ## Open leaderboard - Results are tracked on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | Sakura-SOLRCA-Instruct-DPO | 74.05 | 71.16 | 88.49 | 66.17 | 72.10 | 82.95 | 63.46 | | Sakura-SOLAR-Instruct-DPO-v2 | 74.14 | 70.90 | 88.41 | 66.48 | 71.86 | 83.43 | 63.76 | | [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 74.40 | 70.99 | 88.42 | 66.33 | 71.79 | 83.66 | 65.20 | > Rank 1 as of 2023.12.27, 11:50 PM # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Sakura-SOLAR-Instruct" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` ---
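A short generation follow-up to the implementation snippet above (the prompt text and its format are arbitrary examples, not a template documented by the card):

```python
# Continues from the loading snippet above
prompt = "What is the SOLAR architecture?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=128)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```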
hieunguyenminh/v3.1
hieunguyenminh
2024-01-07T01:47:06Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-01-07T00:22:56Z
--- license: apache-2.0 base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ tags: - trl - sft - generated_from_trainer model-index: - name: v3.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # v3.1 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 400 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
CyberHarem/sumi_otokawa_sakuratrick
CyberHarem
2024-01-07T01:41:49Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/sumi_otokawa_sakuratrick", "license:mit", "region:us" ]
text-to-image
2024-01-07T01:36:01Z
--- license: mit datasets: - CyberHarem/sumi_otokawa_sakuratrick pipeline_tag: text-to-image tags: - art --- # Lora of sumi_otokawa_sakuratrick This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora. For example, if you want to use the model from step 4420, you need to download `4420/sumi_otokawa_sakuratrick.pt` as the embedding and `4420/sumi_otokawa_sakuratrick.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters (a diffusers-based loading sketch follows the steps table below). **The best step we recommend is 4420**, with a score of 0.976. The trigger words are: 1. `sumi_otokawa_sakuratrick` 2. `blush, long_hair, brown_eyes, necktie, brown_hair, black_hair` Use of this model is not recommended for the following groups, and we express our regret: 1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail. 2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits. 3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters. 5. Individuals who find the generated image content offensive to their values. 
These are available steps: | Steps | Score | Download | pattern_1 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata | |:---------|:----------|:--------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:--------------------------------------------------|:-------------------------------------|:-------------------------------------|:-------------------------------------|:-----------------------------------------------|:------------------------------------------------|:-------------------------------------|:-----------------------------------------| | 5100 | 0.871 | [Download](5100/sumi_otokawa_sakuratrick.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) | | 4760 | 0.874 | [Download](4760/sumi_otokawa_sakuratrick.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) | | **4420** | **0.976** | [**Download**](4420/sumi_otokawa_sakuratrick.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) | | 4080 | 0.975 | [Download](4080/sumi_otokawa_sakuratrick.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) | | 3740 | 0.934 | [Download](3740/sumi_otokawa_sakuratrick.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) | | 3400 | 0.974 | [Download](3400/sumi_otokawa_sakuratrick.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to 
see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) | | 3060 | 0.970 | [Download](3060/sumi_otokawa_sakuratrick.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) | | 2720 | 0.967 | [Download](2720/sumi_otokawa_sakuratrick.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) | | 2380 | 0.953 | [Download](2380/sumi_otokawa_sakuratrick.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) | | 2040 | 0.952 | [Download](2040/sumi_otokawa_sakuratrick.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) | | 1700 | 0.966 | [Download](1700/sumi_otokawa_sakuratrick.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) | | 1360 | 0.803 | [Download](1360/sumi_otokawa_sakuratrick.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) | | 1020 | 0.921 | [Download](1020/sumi_otokawa_sakuratrick.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to 
see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) | | 680 | 0.915 | [Download](680/sumi_otokawa_sakuratrick.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) | | 340 | 0.792 | [Download](340/sumi_otokawa_sakuratrick.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
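As referenced above, a hedged loading sketch using `diffusers` rather than the HCP-Diffusion toolchain the model was trained with; the file names follow the card, the base checkpoint is the Meina/MeinaMix_V11 model the card uses for preview images, and whether an HCP-Diffusion `.pt` embedding loads cleanly this way is an assumption:

```python
from diffusers import StableDiffusionPipeline

# Base checkpoint used by the card for preview generation
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V11")

# Step-4420 files downloaded from this repository:
# the safetensors file is loaded as a LoRA, the pt file as an embedding
pipe.load_lora_weights("4420", weight_name="sumi_otokawa_sakuratrick.safetensors")
pipe.load_textual_inversion("4420/sumi_otokawa_sakuratrick.pt", token="sumi_otokawa_sakuratrick")

image = pipe("sumi_otokawa_sakuratrick, blush, long_hair, brown_eyes, necktie").images[0]
image.save("preview.png")
```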
Omega02gdfdd/Omega-bioclip
Omega02gdfdd
2024-01-07T01:39:57Z
3
0
open_clip
[ "open_clip", "pytorch", "clip", "zero-shot-image-classification", "biology", "CV", "images", "animals", "species", "taxonomy", "rare species", "endangered species", "evolutionary biology", "multimodal", "knowledge-guided", "en", "dataset:TreeOfLife-10M", "dataset:iNat21", "dataset:BIOSCAN-1M", "dataset:EOL", "arxiv:2311.18803", "license:mit", "region:us" ]
zero-shot-image-classification
2024-01-07T01:26:00Z
--- license: - mit language: - en library_name: open_clip tags: - zero-shot-image-classification - clip - biology - CV - images - animals - species - taxonomy - rare species - endangered species - evolutionary biology - multimodal - knowledge-guided datasets: - TreeOfLife-10M - iNat21 - BIOSCAN-1M - EOL --- # Model Card for BioCLIP <!-- This modelcard has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). And further altered to suit Imageomics Institute needs --> BioCLIP is a foundation model for the tree of life, built using CLIP architecture as a vision model for general organismal biology. It is trained on [TreeOfLife-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M), our specially-created dataset covering over 450K taxa--the most biologically diverse ML-ready dataset available to date. Through rigorous benchmarking on a diverse set of fine-grained biological classification tasks, BioCLIP consistently outperformed existing baselines by 17% to 20% absolute. Through intrinsic evaluation, we found that BioCLIP learned a hierarchical representation aligned to the tree of life, which demonstrates its potential for robust generalizability. **See the `examples/` directory for examples of how to use BioCLIP in zero-shot and few-shot settings.** ## Model Details ### Model Description BioCLIP is based on OpenAI's [CLIP](https://openai.com/research/clip). We trained the model on [TreeOfLife-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M) from OpenAI's ViT-B/16 checkpoint, using [OpenCLIP's](https://github.com/mlfoundations/open_clip) code. BioCLIP is trained with the standard CLIP objective to imbue the model with an understanding, not just of different species, but of the hierarchical structure that relates species across the tree of life. In this way, BioCLIP offers potential to aid biologists in discovery of new and related creatures, since it does not see the 454K different taxa as distinct classes, but as part of an interconnected hierarchy. - **Developed by:** Samuel Stevens, Jiaman Wu, Matthew J. Thompson, Elizabeth G. Campolongo, Chan Hee Song, David Edward Carlyn, Li Dong, Wasila M. Dahdul, Charles Stewart, Tanya Berger-Wolf, Wei-Lun Chao, and Yu Su - **Model type:** Vision Transformer (ViT-B/16) - **License:** MIT - **Fine-tuned from model:** OpenAI CLIP, ViT-B/16 This model was developed for the benefit of the community as an open-source product, thus we request that any derivative products are also open-source. ### Model Sources - **Repository:** [BioCLIP](https://github.com/Imageomics/BioCLIP) - **Paper:** BioCLIP: A Vision Foundation Model for the Tree of Life ([arXiv](https://doi.org/10.48550/arXiv.2311.18803)) - **Demo:** [BioCLIP Demo](https://huggingface.co/spaces/imageomics/bioclip-demo) ## Uses BioCLIP has been extensively evaluated on species classification tasks across many different subtrees of the tree of life. The ViT-B/16 vision encoder is recommended as a base model for any computer vision task for biology; we expect it to outperform general domain models with the same architecture on biology-specific tasks. ### Direct Use See the demo [here](https://huggingface.co/spaces/imageomics/bioclip-demo) for examples of zero-shot classification. It can also be used in a few-shot setting with a KNN; please see [our paper](https://doi.org/10.48550/arXiv.2311.18803) for details for both few-shot and zero-shot settings without fine-tuning. 
## Bias, Risks, and Limitations This model was developed from the original CLIP model, thus many of the concerns discussed in ([Radford et al. 2021](https://proceedings.mlr.press/v139/radford21a/radford21a.pdf)) apply. We encourage the concerned/curious user to read their extensive ethics statement, while we focus our attention on the biological perspective which is unique to BioCLIP. - No specific geographic information (e.g., GPS coordinates) is included in training, so the species classification does not pose a direct threat to animals through aiding poachers, as it cannot reveal the animals' locations. - BioCLIP is designed to aid in scientific discovery through an association of images to the hierarchical taxonomy structure. As with many--if not all--models currently in production, it is important to retain the context that it is meant to assist biologists in their work, not replace them. As such, we caution against over-reliance on model predictions. ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model BioCLIP can be used with the `open_clip` library: ```py import open_clip model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:imageomics/bioclip') tokenizer = open_clip.get_tokenizer('hf-hub:imageomics/bioclip') ``` ## Training Details ### Compute Infrastructure Training was performed on 8 NVIDIA A100-80GB GPUs distributed over 2 nodes on [OSC's](https://www.osc.edu/) Ascend HPC Cluster with a global batch size of 32,768 for 4 days. Based on the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://doi.org/10.48550/arXiv.1910.09700), that's 132.71 kg of CO<sub>2</sub> eq., or 536 km driven by an average ICE car. ### Training Data This model was trained on [TreeOfLife-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M), which is a compilation of images matched to [Linnaean taxonomic rank](https://www.britannica.com/science/taxonomy/The-objectives-of-biological-classification) from kingdom through species. They are also matched with the common (vernacular) name of the subject of the image where available. For more information, please see our dataset, [TreeOfLife-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M). ### Training Hyperparameters - **Training regime:** fp16 mixed precision. We resize images to 224 x 224 pixels. We use a maximum learning rate of 1e-4 with 1000 linear warm-up steps, then use cosine decay to 0 over 100 epochs. We also use a weight decay of 0.2 and a batch size of 32K. ## Evaluation ### Testing Data We tested BioCLIP on the following collection of 10 biologically-relevant tasks. - [Meta-Album](https://paperswithcode.com/dataset/meta-album): Specifically, we used the Plankton, Insects, Insects 2, PlantNet, Fungi, PlantVillage, Medicinal Leaf, and PlantDoc datasets from Set-0 through Set-2 (Set-3 had still not been released as of our publication/evaluation in Nov. 2023). - [Birds 525](https://www.kaggle.com/datasets/gpiosenka/100-bird-species): We evaluated on the 2,625 test images provided with the dataset. - [Rare Species](https://huggingface.co/datasets/imageomics/rare-species): A new dataset we curated for the purpose of testing this model and to contribute to the ML for Conservation community. 
It consists of 400 species labeled Near Threatened through Extinct in the Wild by the [IUCN Red List](https://www.iucnredlist.org/), with 30 images per species. For more information, see our dataset, [Rare Species](https://huggingface.co/datasets/imageomics/rare-species). For more information about the contents of these datasets, see Table 2 and associated sections of [our paper](https://doi.org/10.48550/arXiv.2311.18803). ### Metrics We use top-1 and top-5 accuracy to evaluate models, and validation loss to choose the best performing checkpoints from training. ### Results We compare BioCLIP to OpenAI's CLIP and OpenCLIP's LAION-2B checkpoint. Here are the zero-shot classification results on our benchmark tasks. Please see [our paper](https://doi.org/10.48550/arXiv.2311.18803) for few-shot results. <table cellpadding="0" cellspacing="0"> <thead> <tr> <th rowspan="2">Model</th> <th colspan="4">Animals</th> <th colspan="5">Plants & Fungi</th> <th rowspan="2">Rare Species</th> <th rowspan="2">Mean</th> </tr> <tr> <th>Birds 525</th> <th>Plankton</th> <th>Insects</th> <th>Insects 2</th> <th>PlantNet</th> <th>Fungi</th> <th>PlantVillage</th> <th>Med. Leaf</th> <th>PlantDoc</th> </tr> </thead> <tbody> <tr> <td>CLIP</td> <td>49.9</td> <td>3.2</td> <td>9.1</td> <td>9.8</td> <td>58.5</td> <td>10.2</td> <td>5.4</td> <td>15.9</td> <td>26.1</td> <td>26.6</td> <td>21.4</td> </tr> <tr> <td>OpenCLIP</td> <td>54.7</td> <td>2.2</td> <td>6.5</td> <td>9.6</td> <td>50.2</td> <td>5.7</td> <td>8.0</td> <td>12.4</td> <td>25.8</td> <td>31.0</td> <td>20.6</td> </tr> <tr> <td>BioCLIP</td> <td><b>74.7</b></td> <td><b>5.4</b></td> <td><b>32.7</b></td> <td><b>21.2</b></td> <td><b>91.0</b></td> <td><b>51.8</b></td> <td><b>24.0</b></td> <td><b>48.1</b></td> <td><b>27.5</b></td> <td><b>39.2</b></td> <td><b>41.5</b></td> </tr> <tr> <td>iNat21 Only</td> <td>55.7</td> <td>2.7</td> <td>29.9</td> <td>12.0</td> <td>89.3</td> <td>42.7</td> <td>16.4</td> <td>22.2</td> <td>18.8</td> <td>19.4</td> <td>30.9</td> </tr> </tbody> </table> ### Summary BioCLIP outperforms general-domain baselines by 18% on average. ### Model Examination We encourage readers to see Section 4.6 of [our paper](https://doi.org/10.48550/arXiv.2311.18803). In short, BioCLIP forms representations that more closely align to the taxonomic hierarchy compared to general-domain baselines like CLIP or OpenCLIP. ## Citation **BibTeX:** ``` @software{bioclip2023, author = {Samuel Stevens and Jiaman Wu and Matthew J. Thompson and Elizabeth G. Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M. 
Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su}, doi = {10.57967/hf/1511}, month = nov, title = {BioCLIP}, version = {v0.1}, year = {2023} } ``` Please also cite our paper: ``` @article{stevens2023bioclip, title = {BIOCLIP: A Vision Foundation Model for the Tree of Life}, author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su}, year = {2023}, eprint = {2311.18803}, archivePrefix = {arXiv}, primaryClass = {cs.CV} } ``` Please also consider citing OpenCLIP, iNat21 and BIOSCAN-1M: ``` @software{ilharco_gabriel_2021_5143773, author={Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title={OpenCLIP}, year={2021}, doi={10.5281/zenodo.5143773}, } ``` ``` @misc{inat2021, author={Van Horn, Grant and Mac Aodha, Oisin}, title={iNat Challenge 2021 - FGVC8}, publisher={Kaggle}, year={2021}, url={https://kaggle.com/competitions/inaturalist-2021} } ``` ``` @inproceedings{gharaee2023step, author={Gharaee, Z. and Gong, Z. and Pellegrino, N. and Zarubiieva, I. and Haurum, J. B. and Lowe, S. C. and McKeown, J. T. A. and Ho, C. Y. and McLeod, J. and Wei, Y. C. and Agda, J. and Ratnasingham, S. and Steinke, D. and Chang, A. X. and Taylor, G. W. and Fieguth, P.}, title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN-1M} Insect Dataset}, booktitle={Advances in Neural Information Processing Systems ({NeurIPS}) Datasets \& Benchmarks Track}, year={2023}, } ``` ## Acknowledgements The authors would like to thank Josef Uyeda, Jim Balhoff, Dan Rubenstein, Hank Bart, Hilmar Lapp, Sara Beery, and colleagues from the Imageomics Institute and the OSU NLP group for their valuable feedback. We also thank the BIOSCAN-1M team and the iNaturalist team for making their data available and easy to use, and Jennifer Hammack at EOL for her invaluable help in accessing EOL’s images. The [Imageomics Institute](https://imageomics.org) is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. ## Model Card Authors Elizabeth G. Campolongo, Samuel Stevens, and Jiaman Wu ## Model Card Contact [[email protected]](mailto:[email protected])
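A hedged zero-shot classification sketch extending the `open_clip` loading snippet above; the candidate species names and the image path are arbitrary examples:

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess_val = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")

labels = ["Panthera leo", "Panthera tigris", "Acinonyx jubatus"]  # example candidate taxa
image = preprocess_val(Image.open("example.jpg")).unsqueeze(0)    # example input image
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```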
Ocean3/experiments
Ocean3
2024-01-07T01:39:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-12-10T15:48:47Z
--- license: creativeml-openrail-m --- # Experiments ![Banner](https://cdn.discordapp.com/attachments/1176405537047462030/1183455519164338227/a_1.png?ex=658865d7&is=6575f0d7&hm=9332a6eea1c9a24026a2fbda9160c524979a23cb71cc9b4b902a0694d592fe8d&) <div align="center"> <a href="https://huggingface.co/Ocean3/experiments/tree/main">Versions</a> <small>🌊</small></div> --- <details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">brsTest series</summary> <div style="margin-top: 7%;"></div> #### Listed ``` brsTest6.1_brsTest5.4+brsTest6(GradA-a0) ``` #### Merge Parts ``` brsTest5.4_brsTest5.3+LunarRadiance_LN(comfyUI_out(.25)) brsTest5.3_brsTeste5.1+msp3.3(Custom1-a0) brsTest5.1_brsTest3+Soushiki_v1+msp3.1(cosineA_TripleSum_Custom1-a1_GradA-b0) brsTest3_Brussels Sprout a1+Sprout a2(flat25-a0) msp3.3_Mugen SP1+Countermellia_v1(Custom1-a0) msp3.1_Mugen SP1+Countermellia_v1(Ring08_Soft-a0) ``` [Downloads](https://huggingface.co/Ocean3/experiments/tree/main/brsTest%20Series) </details> --- <details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">skmdc series</summary> <div style="margin-top: 7%;"></div> Listed is a similarly built test version (skmdc3.2) of [SnowfallKitsune2-cinematictest-mugensp2_fix](https://huggingface.co/sulph/Test/blob/main/SnowfallKitsune2-cinematictest-mugensp2-fix.fp16.safetensors) (aka [Yukige v1](https://huggingface.co/sulph/Yukige/tree/main)) for integrating into further iterations of this series as a test. Version skmdc3.9 is the same with [Soushiki_v1's](https://huggingface.co/Aotsuyu/Soushiki/tree/main) clip. Snowfox v1 is just the mix leading into those two before the [Mugen SP1](https://civitai.com/models/182247/mugen-specials) introduction. ``` Snowfox v1 -> skmdc2.1_Snowfall_v2+Kitsunemix_v1(gradA-a1) skmdc3.2_skmdc2.1+Mugen SP1(gradA-a0) skmdc3.9_skmdc3.2+Soushiki_v1(clip) ``` [Downloads](https://huggingface.co/Ocean3/experiments/tree/main/skmdc%20series) </details> --- ## Terms of Use <small>- You are solely responsible for any legal liability resulting from unethical use of these models <br>- If you use any of these models for merging, please state what steps you took to do so and clearly indicate where modifications have been made.</small> ## License <small>This model is open access and available to all, with a [**CreativeML OpenRAIL-M**](https://huggingface.co/spaces/CompVis/stable-diffusion-license) license further specifying rights and usage.</small> <br><small>1. You can't use the model to deliberately produce or share illegal or harmful outputs or content. <br>2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license. <br>3. You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). 
<br><br>Please read the full license here [Stable Diffusion](https://huggingface.co/spaces/CompVis/stable-diffusion-license)</small> --- <details><summary style="margin-top: -5%; margin-bottom: -5%; cursor: pointer;">Use Restrictions <small><i>(click to expand)</i></small></summary> <div style="margin-top: 7%;"></div> <small>**You agree not to use the Model or Derivatives of the Model:** <br>- In any way that violates any applicable national, federal, state, local or international law or regulation <br>- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way <br>- To generate or disseminate verifiably false information and/or content with the purpose of harming others <br>- To generate or disseminate personal identifiable information that can be used to harm an individual <br>- To defame, disparage or otherwise harass others <br>- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation <br>- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics <br>- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm <br>- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories <br>- To provide medical advice and medical results interpretation <br>- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use). </small></details> --- <div align="center"><figcaption><i>Note: if you see any conflicts or corrections to be made, please let me know</i></figcaption></div>
Panchovix/goliath-120b-exl2
Panchovix
2024-01-07T01:26:34Z
3
19
null
[ "license:llama2", "region:us" ]
null
2023-11-06T17:36:39Z
---
license: llama2
---

EXL2 quants of [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b), to be used with exllamav2. The calibration dataset is wikitext. I've added a measurement.json file on the main branch if you want to do your own quants.

IMPORTANT: For the 3bpw quant, if you are using ooba text-generation-webui, disable the BOS token, or you will get gibberish; see https://huggingface.co/Panchovix/goliath-120b-exl2/discussions/1

[4.85bpw](https://huggingface.co/Panchovix/goliath-120b-exl2/tree/4.85bpw)

[4.5bpw](https://huggingface.co/Panchovix/goliath-120b-exl2/tree/4.5bpw)

[3bpw](https://huggingface.co/Panchovix/goliath-120b-exl2/tree/3bpw)

# Original Model card

# Goliath 120B

An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one. Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):

- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)

# Prompting Format

Both Vicuna and Alpaca will work, but because the initial and final layers belong primarily to Xwin, I expect Vicuna to work the best.

# Merge process

The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).

The layer ranges used are as follows:

```yaml
- range 0, 16
  Xwin
- range 8, 24
  Euryale
- range 17, 32
  Xwin
- range 25, 40
  Euryale
- range 33, 48
  Xwin
- range 41, 56
  Euryale
- range 49, 64
  Xwin
- range 57, 72
  Euryale
- range 65, 80
  Xwin
```

# Screenshots

![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png)

# Benchmarks

Coming soon.

# Acknowledgements

Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).

Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
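For reference, each quant lives on its own branch of this repo (see the bpw links above), so a specific variant can be fetched with the `huggingface_hub` Python library by passing the branch name as `revision`. A minimal sketch; the local directory name is just an illustration:

```python
from huggingface_hub import snapshot_download

# Fetch only the 4.85bpw quant; each bpw variant lives on its own branch,
# so the branch name goes in `revision`.
snapshot_download(
    repo_id="Panchovix/goliath-120b-exl2",
    revision="4.85bpw",
    local_dir="./goliath-120b-exl2-4.85bpw",  # illustrative local path
)
```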
castusaweb/llama2-qlora-finetunined-french
castusaweb
2024-01-07T01:24:17Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us" ]
null
2023-11-05T01:41:26Z
--- library_name: peft base_model: TinyPixel/Llama-2-7B-bf16-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
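The "How to Get Started" section above is left unfilled. Since the card metadata names the base model and the library is PEFT, a minimal sketch would look like the following; note that treating this repo as a QLoRA adapter is an assumption based on the repo name, and the prompt is purely illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo holds a LoRA/QLoRA adapter for the base model below.
base = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "castusaweb/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")

prompt = "Bonjour, comment allez-vous ?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```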
gputrain/ppo-PyramidsRND
gputrain
2024-01-07T01:12:30Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-01-07T01:12:04Z
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---

# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: gputrain/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
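To inspect the trained checkpoint locally (for example, to open the .onnx policy in the Unity Editor), the repository can be pulled with the `huggingface_hub` Python library. A minimal sketch; the local directory name is just an illustration:

```python
from huggingface_hub import snapshot_download

# Download the trained policy (.onnx) together with configs and TensorBoard logs.
snapshot_download(
    repo_id="gputrain/ppo-PyramidsRND",
    local_dir="./downloads/ppo-PyramidsRND",  # illustrative local path
)
```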
cmunhozc/news-ranking-ft-bert
cmunhozc
2024-01-07T01:10:54Z
9
1
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "CENIA", "News", "en", "dataset:cmunhozc/usa_news_en", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-05T01:25:33Z
---
license: mit
base_model: bert-base-cased
tags:
- CENIA
- News
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned
  results: []
datasets:
- cmunhozc/usa_news_en
language:
- en
pipeline_tag: text-classification
widget:
- text: "Poll: Which COVID-related closure in San Francisco has you the most shook up? || President Trump has pardoned Edward DeBartolo Jr., the former San Francisco 49ers owner convicted in a gambling fraud scandal."
  output:
  - label: RELATED
    score: 0
  - label: UNRELATED
    score: 1
- text: "The first batch of 2020 census data surprised many. A look at what's next || There were some genuine surprises in the first batch of data from the nation’s 2020 head count released this week by the U.S. Census Bureau."
  output:
  - label: RELATED
    score: 1
  - label: UNRELATED
    score: 0
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-finetuned

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [usa_news_en dataset](https://huggingface.co/datasets/cmunhozc/usa_news_en). It achieves the following results on the evaluation set:
- Loss: 0.0900
- Accuracy: 0.9800

## Model description

The fine-tuned model is a binary classifier that determines whether two English news headlines are related or unrelated. More details can be found in the paper **News Gathering: Leveraging Transformers to Rank News**.

To use the fine-tuned model, you can follow the steps outlined below:

```python
import numpy as np
import evaluate
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import Trainer

### 1. Load the model:
model_name = "cmunhozc/news-ranking-ft-bert"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

### 2. Dataset:
def preprocess_fctn(examples):
    # Encode the two headlines of each pair as a single sequence-pair input
    return tokenizer(examples["sentence1"], examples["sentence2"], truncation=True)

...  # load the pair dataset here before mapping
encoded_dataset = dataset.map(preprocess_fctn, batched=True, load_from_cache_file=False)
...

### 3. Evaluation:
metric = evaluate.load("accuracy")  # accuracy metric used below

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)

trainer_hf = Trainer(model,
                     eval_dataset = encoded_dataset['validation'],
                     tokenizer = tokenizer,
                     compute_metrics = compute_metrics)

trainer_hf.evaluate()
predictions = trainer_hf.predict(encoded_dataset["validation"])
acc_val = metric.compute(predictions=np.argmax(predictions.predictions, axis=1).tolist(),
                         references=predictions.label_ids)['accuracy']
```

Finally, with the classification model above, you can follow the steps below to generate the news ranking (a sketch of this step appears at the end of this card):

- For each news article in the [google_news_en dataset](https://huggingface.co/datasets/cmunhozc/google_news_en) positioned as the first element in a pair, retrieve all corresponding pairs from the dataset.
- Using the pair encoder, rank the news articles that occupy the second position in each pair by their relevance to the first article.
- Sort each list produced by the encoder by the probability obtained for the relevance class.

## Intended uses & limitations

More information needed

## Training, evaluation and test data

The training data is sourced from the *train* split of the [usa_news_en dataset](https://huggingface.co/datasets/cmunhozc/usa_news_en), and a similar procedure is applied for the *validation* set. For testing, the first segment for the text classification model is derived from the *test_1* and *test_2* splits. As for the ranking model, the test dataset from the [google_news_en dataset](https://huggingface.co/datasets/cmunhozc/google_news_en) is utilized.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0967        | 1.0   | 3526  | 0.0651          | 0.9771   |
| 0.0439        | 2.0   | 7052  | 0.0820          | 0.9776   |
| 0.0231        | 3.0   | 10578 | 0.0900          | 0.9800   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
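As an illustration of the ranking step described in the model description, a minimal sketch follows. `rank_candidates` is a hypothetical helper (not part of this repo), and it assumes that class index 0 corresponds to RELATED; verify this against `model.config.id2label` before relying on it:

```python
import torch
import torch.nn.functional as F

def rank_candidates(query_headline, candidate_headlines, model, tokenizer):
    # Encode (query, candidate) pairs in one batch, as sequence-pair inputs.
    inputs = tokenizer([query_headline] * len(candidate_headlines),
                       candidate_headlines,
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the assumed RELATED class (index 0) for each pair.
    related_probs = F.softmax(logits, dim=-1)[:, 0]
    # Sort candidates by descending relevance probability.
    order = torch.argsort(related_probs, descending=True).tolist()
    return [(candidate_headlines[i], related_probs[i].item()) for i in order]
```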
CyberHarem/mitsuki_sonoda_sakuratrick
CyberHarem
2024-01-07T01:04:07Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/mitsuki_sonoda_sakuratrick", "license:mit", "region:us" ]
text-to-image
2024-01-07T00:53:20Z
---
license: mit
datasets:
- CyberHarem/mitsuki_sonoda_sakuratrick
pipeline_tag: text-to-image
tags:
- art
---

# Lora of mitsuki_sonoda_sakuratrick

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, if you want to use the model from step 4760, you need to download `4760/mitsuki_sonoda_sakuratrick.pt` as the embedding and `4760/mitsuki_sonoda_sakuratrick.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character.

**The best step we recommend is 4760**, with a score of 0.988.

The trigger words are:

1. `mitsuki_sonoda_sakuratrick`
2. `blonde_hair, glasses, flower, blush, hair_flower, hair_ornament, long_hair, green_eyes, red-framed_eyewear`

This model is not recommended for the following groups, and we express regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
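For users who prefer diffusers over a WebUI, a minimal sketch is shown below. It assumes the step-4760 files have been downloaded into a local `4760/` folder and that the embedding and LoRA are in formats diffusers can parse; the card itself only documents the embedding-plus-LoRA workflow, so treat this as an unverified illustration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the same base model that was used for the preview images.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# The embedding (.pt) and the LoRA (.safetensors) must be used together.
pipe.load_textual_inversion(
    "4760/mitsuki_sonoda_sakuratrick.pt", token="mitsuki_sonoda_sakuratrick"
)
pipe.load_lora_weights("4760", weight_name="mitsuki_sonoda_sakuratrick.safetensors")

image = pipe(
    "mitsuki_sonoda_sakuratrick, blonde_hair, glasses, flower, hair_ornament"
).images[0]
image.save("preview.png")
```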
These are available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 5100 | 0.986 | [Download](5100/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | ![pattern_7-5100](5100/previews/pattern_7.png) | ![pattern_8-5100](5100/previews/pattern_8.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| **4760** | **0.988** | [**Download**](4760/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![pattern_5-4760](4760/previews/pattern_5.png) | ![pattern_6-4760](4760/previews/pattern_6.png) | ![pattern_7-4760](4760/previews/pattern_7.png) | ![pattern_8-4760](4760/previews/pattern_8.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) |
| 4420 | 0.988 | [Download](4420/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![pattern_5-4420](4420/previews/pattern_5.png) | ![pattern_6-4420](4420/previews/pattern_6.png) | ![pattern_7-4420](4420/previews/pattern_7.png) | ![pattern_8-4420](4420/previews/pattern_8.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) |
| 4080 | 0.987 | [Download](4080/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | ![pattern_7-4080](4080/previews/pattern_7.png) | ![pattern_8-4080](4080/previews/pattern_8.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| 3740 | 0.948 | [Download](3740/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![pattern_5-3740](3740/previews/pattern_5.png) | ![pattern_6-3740](3740/previews/pattern_6.png) | ![pattern_7-3740](3740/previews/pattern_7.png) | ![pattern_8-3740](3740/previews/pattern_8.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) |
| 3400 | 0.975 | [Download](3400/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![pattern_5-3400](3400/previews/pattern_5.png) | ![pattern_6-3400](3400/previews/pattern_6.png) | ![pattern_7-3400](3400/previews/pattern_7.png) | ![pattern_8-3400](3400/previews/pattern_8.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) |
| 3060 | 0.979 | [Download](3060/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | ![pattern_7-3060](3060/previews/pattern_7.png) | ![pattern_8-3060](3060/previews/pattern_8.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2720 | 0.979 | [Download](2720/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![pattern_5-2720](2720/previews/pattern_5.png) | ![pattern_6-2720](2720/previews/pattern_6.png) | ![pattern_7-2720](2720/previews/pattern_7.png) | ![pattern_8-2720](2720/previews/pattern_8.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) |
| 2380 | 0.981 | [Download](2380/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![pattern_5-2380](2380/previews/pattern_5.png) | ![pattern_6-2380](2380/previews/pattern_6.png) | ![pattern_7-2380](2380/previews/pattern_7.png) | ![pattern_8-2380](2380/previews/pattern_8.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) |
| 2040 | 0.964 | [Download](2040/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | ![pattern_7-2040](2040/previews/pattern_7.png) | ![pattern_8-2040](2040/previews/pattern_8.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1700 | 0.960 | [Download](1700/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![pattern_5-1700](1700/previews/pattern_5.png) | ![pattern_6-1700](1700/previews/pattern_6.png) | ![pattern_7-1700](1700/previews/pattern_7.png) | ![pattern_8-1700](1700/previews/pattern_8.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) |
| 1360 | 0.891 | [Download](1360/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![pattern_5-1360](1360/previews/pattern_5.png) | ![pattern_6-1360](1360/previews/pattern_6.png) | ![pattern_7-1360](1360/previews/pattern_7.png) | ![pattern_8-1360](1360/previews/pattern_8.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) |
| 1020 | 0.924 | [Download](1020/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | ![pattern_7-1020](1020/previews/pattern_7.png) | ![pattern_8-1020](1020/previews/pattern_8.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
| 680 | 0.889 | [Download](680/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![pattern_5-680](680/previews/pattern_5.png) | ![pattern_6-680](680/previews/pattern_6.png) | ![pattern_7-680](680/previews/pattern_7.png) | ![pattern_8-680](680/previews/pattern_8.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) |
| 340 | 0.530 | [Download](340/mitsuki_sonoda_sakuratrick.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![pattern_5-340](340/previews/pattern_5.png) | ![pattern_6-340](340/previews/pattern_6.png) | ![pattern_7-340](340/previews/pattern_7.png) | ![pattern_8-340](340/previews/pattern_8.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
praison/mistralai-7B-v01-fine-tuned-using-ludwig-4bit
praison
2024-01-07T00:47:23Z
3
1
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-07T00:07:07Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
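The "How to Get Started" section above is left unfilled. Since the card metadata names the base model and the library is PEFT, a minimal sketch would look like the following; note that treating this repo as a 4-bit fine-tuned LoRA adapter is an assumption based on the repo name, and the prompt is purely illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo holds a LoRA adapter for the base model below.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base, "praison/mistralai-7B-v01-fine-tuned-using-ludwig-4bit"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Explain what a LoRA adapter is."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```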
helenblake13/first-baseline-780-2374
helenblake13
2024-01-07T00:33:36Z
0
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-07T00:29:42Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### first_baseline2 Dreambooth model trained by helenblake13 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
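Since the repo is tagged as a diffusers `StableDiffusionPipeline`, the concept can also be tried outside Colab. A minimal sketch, assuming `first_baseline2` is the instance token used during DreamBooth training (adjust the prompt to whatever the concept was actually trained with):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "helenblake13/first-baseline-780-2374", torch_dtype=torch.float16
).to("cuda")

# "first_baseline2" is assumed to be the DreamBooth instance token.
image = pipe("a photo of first_baseline2").images[0]
image.save("sample.png")
```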
Ojimi/vit-anime-caption
Ojimi
2024-01-07T00:22:39Z
13
1
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "vision", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T21:50:14Z
---
license: apache-2.0
pipeline_tag: image-classification
tags:
- pytorch
- vision
library_name: transformers
---

This model is the product of curiosity: imagine a choice that allows you to label anime images!

**Disclaimer**: The model has been trained on an entirely new dataset. Predictions made by the model *prior to 2023 might be off*. It's advisable to fine-tune the model for your specific use case.

# Quick setup guide:

```python
from transformers.modeling_outputs import ImageClassifierOutput
from transformers import ViTImageProcessor, ViTForImageClassification
import torch
from PIL import Image

model_name_or_path = "Ojimi/vit-anime-caption"
processor = ViTImageProcessor.from_pretrained(model_name_or_path)
model = ViTForImageClassification.from_pretrained(model_name_or_path)
threshold = 0.3

device = torch.device('cuda')

image = Image.open(YOUR_IMAGE_PATH)
inputs = processor(image, return_tensors='pt')

model.to(device=device)
model.eval()

with torch.no_grad():
    pixel_values = inputs['pixel_values'].to(device=device)
    outputs : ImageClassifierOutput = model(pixel_values=pixel_values)
    logits = outputs.logits  # The raw scores before applying any activation

sigmoid = torch.nn.Sigmoid()  # Sigmoid function to convert logits to probabilities
logits : torch.FloatTensor = sigmoid(logits)  # Applying sigmoid activation

predictions = []  # List to store predictions

for idx, p in enumerate(logits[0]):
    if p > threshold:  # Keep a tag only if its probability exceeds the threshold
        predictions.append((model.config.id2label[idx], p.item()))  # Storing class label and probability

for tag in predictions:
    print(tag)
```

Why the `Sigmoid`?

- Unlike softmax, sigmoid scores every tag independently, which is exactly what multi-label tagging needs: an image can carry many tags at once, so each probability can be thresholded on its own.
- It's like a wizard turning regular stuff into magic potions!

[Training guide](https://huggingface.co/Ojimi/vit-anime-caption/blob/main/training_guide.md)
maywell/TinyWand-SFT
maywell
2024-01-07T00:19:01Z
1,429
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-03T14:48:55Z
---
license: apache-2.0
---

# **TinyWand-SFT**

<p align="left">
<img src="./TinyWand.png" width="150"/>
<p>

# **Model Description**

**1.63B: how about an SLM of this humble size?**

## **Model Introduction**

**TinyWand-SFT** is a 1.63B SLM. Thanks to this small 1.63B footprint, it can run on small devices or deliver high tokens/s while still showing strong performance.

## **Model License**

apache-2.0

## **Model Performance**

TBD

### Limitations

Because of its small size, the model tends not to respond properly after instruct fine-tuning when a prompt does not follow the expected template. If you use it for a specific task, fine-tuning is recommended over prompting. For the same reason, it also scores quite low on general benchmarks.

## **Training Process**

TBD

## **Usage Guide**

**VRAM required for inference**

| Quantization | Input tokens | Output tokens | Memory usage |
|---|---|---|---|
| bf16 (base) | 64 | 256 | 3,888 MiB |
| q4_K_M | 64 | 256 | 1,788 MiB |

**Prompt template**

This model uses the Alpaca prompt template. You can apply it with `apply_chat_template()`; see the [Hugging Face chat templating docs](https://huggingface.co/docs/transformers/main/chat_templating).

**You can load and use the model with the Python code below.**

*transformers and torch must be installed first.*

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # assumes an NVIDIA GPU

tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-SFT")
model = AutoModelForCausalLM.from_pretrained(
    "maywell/TinyWand-SFT",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # switch to torch.float16 if your hardware does not support bfloat16
)

messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},  # the template is applied the same way even if this is left empty
    {"role": "user", "content": "What advantages does a language model gain from having a small parameter count?"},
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
snintendog/Cid_Kagenou-kagejistsu_Master_of_Garden
snintendog
2024-01-07T00:17:26Z
0
0
null
[ "license:openrail", "region:us" ]
null
2024-01-06T21:04:06Z
---
license: openrail
---

600 epochs for NPC and Shadow. rmvpe, RVC v2. Both have an odd English/JP dual-audio ability.

Use -8 to +8 for NPC regardless of gender; -16 to -8 for females and -10 to -2 for males on Shadow.

Made from about 11-12 minutes of voice files from Master of Garden.
TheBloke/sonya-medium-x8-MoE-GGUF
TheBloke
2024-01-07T00:10:13Z
51
4
transformers
[ "transformers", "gguf", "mixtral", "license:wtfpl", "region:us" ]
null
2024-01-06T23:37:17Z
---
base_model: dillfrescott/sonya-medium-x8-MoE
inference: false
license: wtfpl
model_creator: Cross Nastasi
model_name: Sonya Medium x8 MoE
model_type: mixtral
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.

  ### Instruction:

  {prompt}

  ### Response:

  '
quantized_by: TheBloke
---

<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Sonya Medium x8 MoE - GGUF
- Model creator: [Cross Nastasi](https://huggingface.co/dillfrescott)
- Original model: [Sonya Medium x8 MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Cross Nastasi's Sonya Medium x8 MoE](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF)
* [Cross Nastasi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/dillfrescott/sonya-medium-x8-MoE)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [sonya-medium-x8-moe.Q2_K.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q2_K.gguf) | Q2_K | 2 | 23.39 GB| 25.89 GB | smallest, significant quality loss - not recommended for most purposes |
| [sonya-medium-x8-moe.Q3_K_M.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q3_K_M.gguf) | Q3_K_M | 3 | 30.46 GB| 32.96 GB | very small, high quality loss |
| [sonya-medium-x8-moe.Q4_0.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q4_0.gguf) | Q4_0 | 4 | 39.57 GB| 42.07 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [sonya-medium-x8-moe.Q4_K_M.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q4_K_M.gguf) | Q4_K_M | 4 | 39.57 GB| 42.07 GB | medium, balanced quality - recommended |
| [sonya-medium-x8-moe.Q5_0.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q5_0.gguf) | Q5_0 | 5 | 48.25 GB| 50.75 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [sonya-medium-x8-moe.Q5_K_M.gguf](https://huggingface.co/TheBloke/sonya-medium-x8-MoE-GGUF/blob/main/sonya-medium-x8-moe.Q5_K_M.gguf) | Q5_K_M | 5 | 48.25 GB| 50.75 GB | large, very low quality loss - recommended |
| sonya-medium-x8-moe.Q6_K.gguf | Q6_K | 6 | 57.46 GB| 59.96 GB | very large, extremely low quality loss |
| sonya-medium-x8-moe.Q8_0.gguf | Q8_0 | 8 | 74.30 GB| 76.80 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

### Q6_K and Q8_0 files are split and require joining

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.

<details>
  <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `sonya-medium-x8-moe.Q6_K.gguf-split-a`
* `sonya-medium-x8-moe.Q6_K.gguf-split-b`

### q8_0
Please download:
* `sonya-medium-x8-moe.Q8_0.gguf-split-a`
* `sonya-medium-x8-moe.Q8_0.gguf-split-b`

To join the files, do the following:

Linux and macOS:
```
cat sonya-medium-x8-moe.Q6_K.gguf-split-* > sonya-medium-x8-moe.Q6_K.gguf && rm sonya-medium-x8-moe.Q6_K.gguf-split-*
cat sonya-medium-x8-moe.Q8_0.gguf-split-* > sonya-medium-x8-moe.Q8_0.gguf && rm sonya-medium-x8-moe.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B sonya-medium-x8-moe.Q6_K.gguf-split-a + sonya-medium-x8-moe.Q6_K.gguf-split-b sonya-medium-x8-moe.Q6_K.gguf
del sonya-medium-x8-moe.Q6_K.gguf-split-a sonya-medium-x8-moe.Q6_K.gguf-split-b

COPY /B sonya-medium-x8-moe.Q8_0.gguf-split-a + sonya-medium-x8-moe.Q8_0.gguf-split-b sonya-medium-x8-moe.Q8_0.gguf
del sonya-medium-x8-moe.Q8_0.gguf-split-a sonya-medium-x8-moe.Q8_0.gguf-split-b
```

</details>
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/sonya-medium-x8-MoE-GGUF and below it, a specific filename to download, such as: sonya-medium-x8-moe.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/sonya-medium-x8-MoE-GGUF sonya-medium-x8-moe.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/sonya-medium-x8-MoE-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/sonya-medium-x8-MoE-GGUF sonya-medium-x8-moe.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.

</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m sonya-medium-x8-moe.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./sonya-medium-x8-moe.Q4_K_M.gguf",  # Download the model file first
  n_ctx=8192,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./sonya-medium-x8-moe.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute.
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Cross Nastasi's Sonya Medium x8 MoE

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/uTxtkyc1-V6NT_5xfM5Us.png)

I present my Magnum Opus of LLM merges for 2023. This is a monster of a model, created by merging eight copies of [sonya-medium](https://huggingface.co/dillfrescott/sonya-medium) into one mixture of experts.

Enjoy! ;)

Config:

```
base_model: dillfrescott/sonya-medium
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
  - source_model: dillfrescott/sonya-medium
    positive_prompts: [""]
```

P.S. Be careful with K quants of this model, as they may be glitched!

*The cover image is heavily inspired by Muzan Kibutsuji from Demon Slayer.*

Example outputs and reasoning
(the following outputs are from a q4_0 quant, so the fully unquantized model would likely perform even better):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/8kl6KXEPyMG7bZZOVz9kv.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/-zxKYEYoT88ffryuVrVEa.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/up005XvFd6bkWxlOq_hBo.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/xx5x5SYuOF50DNvo06t_v.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/WOf2jV3Bq2MOVViS58IM6.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6215ce9abfcb3893344dd0a2/PYQZGdOqBntakYAcT1DaV.png)

<!-- original-model-card end -->
dnoever/Falkor-7b-5.0bpw-exl2
dnoever
2024-01-07T00:06:17Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-07T00:02:47Z
---
license: apache-2.0
---
# Falkor 7B - RAG (dragon) Model

<img src="falkor.png" width="300">

Model merge between Chupacabra 7b v2.04 and dragon-mistral-7b-v0.

---> [Theme Song](https://www.youtube.com/watch?v=lHytjEj7B9g) <---

# Original Model Card for dragon-mistral-7b-v0

<!-- Provide a quick summary of what the model is/does. -->

dragon-mistral-7b-v0 is part of the dRAGon ("Delivering RAG On ...") model series, RAG-instruct trained on top of a Mistral-7B base model.

DRAGON models have been fine-tuned with the specific objective of fact-based question-answering over complex business and legal documents, with an emphasis on reducing hallucinations and providing short, clear answers for workflow automation.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)

Average of 2 test runs, with 1 point for a correct answer, 0.5 points for a partially correct or blank / NF answer, 0.0 points for an incorrect answer, and -1 point for hallucinations.

- **Accuracy Score**: **96.50** correct out of 100
- Not Found Classification: 92.50%
- Boolean: 97.50%
- Math/Logic: 81.25%
- Complex Questions (1-5): 4 (Medium-High - table-reading, multiple-choice, causal)
- Summarization Quality (1-5): 4 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.

For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** Mistral-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7B-Base

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

DRAGON is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and the legal and regulatory industries, with complex information sources.

DRAGON models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with dRAGon is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dragon-mistral-7b-v0")
model = AutoModelForCausalLM.from_pretrained("dragon-mistral-7b-v0")
```

Please refer to the `generation_test` .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.
The dRAGon model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

The dRAGon model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

1. Text Passage Context, and
2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```

If you are using a HuggingFace generation script:

```python
# prepare prompt packaging used in fine-tuning process
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries

outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
```

## Model Card Contact

Darren Oberst & llmware team
jeiku/Humiliation_StableLM
jeiku
2024-01-07T00:03:55Z
0
0
null
[ "safetensors", "en", "license:other", "region:us" ]
null
2024-01-07T00:03:03Z
--- license: other language: - en ---
clydelyde/math_books
clydelyde
2024-01-06T23:56:54Z
0
0
peft
[ "peft", "safetensors", "text-generation", "arxiv:1910.09700", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded", "region:us" ]
text-generation
2023-12-15T04:56:30Z
--- library_name: peft base_model: vilsonrodrigues/falcon-7b-instruct-sharded pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.3.dev0
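For reference, the `bitsandbytes` settings listed above correspond to a `transformers` `BitsAndBytesConfig` along the lines of the sketch below. This is an illustrative reconstruction, not code shipped with this adapter; the base model and adapter ids are taken from this card's front matter.

```python
# Minimal sketch: the bitsandbytes settings above, expressed as a BitsAndBytesConfig,
# then used to load the base model named in this card with this PEFT adapter attached.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon checkpoints have historically required this; may be unnecessary on recent transformers
)
model = PeftModel.from_pretrained(base, "clydelyde/math_books")
tokenizer = AutoTokenizer.from_pretrained("vilsonrodrigues/falcon-7b-instruct-sharded")
```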
unnu1023/xlm-roberta-large-finetuned-ner
unnu1023
2024-01-06T23:55:55Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-06T21:24:18Z
--- license: mit base_model: xlm-roberta-large tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlm-roberta-large-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-ner This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0541 - Precision: 0.1505 - Recall: 0.0201 - F1: 0.0355 - Accuracy: 0.7304 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.0694 | 0.37 | 7000 | 1.0495 | 0.1505 | 0.0201 | 0.0355 | 0.7304 | | 1.0581 | 0.74 | 14000 | 1.0539 | 0.1505 | 0.0201 | 0.0355 | 0.7304 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
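Since the usage sections above are still marked "More information needed", here is a minimal, hypothetical usage sketch. It assumes only the standard `transformers` pipeline API and the repo id from this card; entity quality will reflect the evaluation scores reported above.

```python
# Minimal usage sketch (not from the original card) for this token-classification checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="unnu1023/xlm-roberta-large-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```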
Floyd93/Grammar_Summarizer
Floyd93
2024-01-06T23:55:19Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-05T04:48:05Z
--- license: apache-2.0 base_model: google/mt5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: Grammar_Summarizer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Grammar_Summarizer This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5127 - Rouge1: 0.4494 - Rouge2: 0.3672 - Rougel: 0.3833 - Rougelsum: 0.3849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 90 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 2.2799 | 0.25 | 100 | 1.0334 | 0.3916 | 0.3085 | 0.2696 | 0.2717 | | 1.0618 | 0.5 | 200 | 0.6095 | 0.3287 | 0.2746 | 0.2891 | 0.2900 | | 0.8719 | 0.76 | 300 | 0.5127 | 0.4494 | 0.3672 | 0.3833 | 0.3849 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
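As the usage sections above are still placeholders, here is a minimal, hypothetical invocation sketch. It assumes the standard `transformers` pipeline API plus the repo id and pipeline tag from this card; any prompt prefix used during fine-tuning is not documented here.

```python
# Minimal usage sketch (not from the original card) for this mT5 text2text checkpoint.
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="Floyd93/Grammar_Summarizer")
text = "Your input passage goes here."
print(summarizer(text, max_new_tokens=128)[0]["generated_text"])
```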
CyberHarem/kotone_noda_sakuratrick
CyberHarem
2024-01-06T23:43:44Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/kotone_noda_sakuratrick", "license:mit", "region:us" ]
text-to-image
2024-01-06T23:36:09Z
---
license: mit
datasets:
- CyberHarem/kotone_noda_sakuratrick
pipeline_tag: text-to-image
tags:
- art
---

# Lora of kotone_noda_sakuratrick

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).

The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded as a LoRA.

For example, if you want to use the model from step 3740, you need to download `3740/kotone_noda_sakuratrick.pt` as the embedding and `3740/kotone_noda_sakuratrick.safetensors` for loading the LoRA. By using both files together, you can generate images of the desired character (a rough loading sketch is included after the notes below).

**The best step we recommend is 3740**, with a score of 0.975.

The trigger words are:

1. `kotone_noda_sakuratrick`
2. `blush, long_hair, brown_hair, smile, brown_eyes, blonde_hair`

For the following groups, this model is not recommended, and we express our regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
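As a rough illustration of using the two files together outside a WebUI, here is a minimal `diffusers` sketch. This is an assumption-laden example, not part of the original release: it assumes the exported `.pt` embedding and `.safetensors` LoRA are in formats `diffusers` can read (HCP-Diffusion exports may require conversion first), and that the preview base model `Meina/MeinaMix_V11` is loadable via `from_pretrained`.

```python
# Hypothetical loading sketch - verify that the HCP-Diffusion exports are diffusers-compatible first.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the step-3740 embedding (pt) and LoRA (safetensors) together, as the card instructs.
pipe.load_textual_inversion("3740/kotone_noda_sakuratrick.pt", token="kotone_noda_sakuratrick")
pipe.load_lora_weights("3740", weight_name="kotone_noda_sakuratrick.safetensors")

image = pipe(
    "kotone_noda_sakuratrick, blush, long_hair, brown_hair, smile, brown_eyes, blonde_hair",
    num_inference_steps=28,
).images[0]
image.save("kotone.png")
```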
These are available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:------|:------|:---------|:----------|:----------|:----------|:----------|:----------|:----------|:-------|:--------|:-----|:-----|:-----|:-----|:------|:-----|:-------|
| 5100 | 0.952 | [Download](5100/kotone_noda_sakuratrick.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | [<NSFW, click to see>](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | [<NSFW, click to see>](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| 4760 | 0.969 | [Download](4760/kotone_noda_sakuratrick.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | [<NSFW, click to see>](4760/previews/pattern_5.png) | ![pattern_6-4760](4760/previews/pattern_6.png) | [<NSFW, click to see>](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) |
| 4420 | 0.972 | [Download](4420/kotone_noda_sakuratrick.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | [<NSFW, click to see>](4420/previews/pattern_5.png) | ![pattern_6-4420](4420/previews/pattern_6.png) | [<NSFW, click to see>](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) |
| 4080 | 0.953 | [Download](4080/kotone_noda_sakuratrick.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | [<NSFW, click to see>](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | [<NSFW, click to see>](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| **3740** | **0.975** | [**Download**](3740/kotone_noda_sakuratrick.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | [<NSFW, click to see>](3740/previews/pattern_5.png) | ![pattern_6-3740](3740/previews/pattern_6.png) | [<NSFW, click to see>](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) |
| 3400 | 0.902 | [Download](3400/kotone_noda_sakuratrick.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | [<NSFW, click to see>](3400/previews/pattern_5.png) | ![pattern_6-3400](3400/previews/pattern_6.png) | [<NSFW, click to see>](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) |
| 3060 | 0.963 | [Download](3060/kotone_noda_sakuratrick.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | [<NSFW, click to see>](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | [<NSFW, click to see>](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2720 | 0.895 | [Download](2720/kotone_noda_sakuratrick.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | [<NSFW, click to see>](2720/previews/pattern_5.png) | ![pattern_6-2720](2720/previews/pattern_6.png) | [<NSFW, click to see>](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) |
| 2380 | 0.957 | [Download](2380/kotone_noda_sakuratrick.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | [<NSFW, click to see>](2380/previews/pattern_5.png) | ![pattern_6-2380](2380/previews/pattern_6.png) | [<NSFW, click to see>](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) |
| 2040 | 0.888 | [Download](2040/kotone_noda_sakuratrick.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | [<NSFW, click to see>](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | [<NSFW, click to see>](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1700 | 0.902 | [Download](1700/kotone_noda_sakuratrick.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | [<NSFW, click to see>](1700/previews/pattern_5.png) | ![pattern_6-1700](1700/previews/pattern_6.png) | [<NSFW, click to see>](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) |
| 1360 | 0.902 | [Download](1360/kotone_noda_sakuratrick.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | [<NSFW, click to see>](1360/previews/pattern_5.png) | ![pattern_6-1360](1360/previews/pattern_6.png) | [<NSFW, click to see>](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) |
| 1020 | 0.917 | [Download](1020/kotone_noda_sakuratrick.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | [<NSFW, click to see>](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | [<NSFW, click to see>](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
| 680 | 0.812 | [Download](680/kotone_noda_sakuratrick.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | [<NSFW, click to see>](680/previews/pattern_5.png) | ![pattern_6-680](680/previews/pattern_6.png) | [<NSFW, click to see>](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) |
| 340 | 0.356 | [Download](340/kotone_noda_sakuratrick.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | [<NSFW, click to see>](340/previews/pattern_5.png) | ![pattern_6-340](340/previews/pattern_6.png) | [<NSFW, click to see>](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
TheBloke/Pallas-0.5-frankenmerge-GPTQ
TheBloke
2024-01-06T23:31:18Z
31
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:Mihaiii/Pallas-0.5-frankenmerge", "base_model:quantized:Mihaiii/Pallas-0.5-frankenmerge", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-06T20:18:09Z
--- base_model: Mihaiii/Pallas-0.5-frankenmerge inference: false license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license metrics: - accuracy model_creator: Mihai model_name: Pallas 0.5 Frankenmerge model_type: yi prompt_template: 'SYSTEM: {system_message} USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Pallas 0.5 Frankenmerge - GPTQ - Model creator: [Mihai](https://huggingface.co/Mihaiii) - Original model: [Pallas 0.5 Frankenmerge](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge) <!-- description start --> # Description This repo contains GPTQ model files for [Mihai's Pallas 0.5 Frankenmerge](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GGUF) * [Mihai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mihaiii/Pallas-0.5-frankenmerge) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. 
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

This may not be a complete list; if you know of others, please let me know!

<!-- README_GPTQ.md-compatible clients end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.

<details>
  <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.44 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 20.12 GB | Yes | 4-bit, with Act Order and group size 128g. Uses less VRAM than 32g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 22.18 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 15.69 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 37.02 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 17.65 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 37.83 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/Pallas-0.5-frankenmerge-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Pallas-0.5-frankenmerge-GPTQ:gptq-4bit-128g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `Pallas-0.5-frankenmerge-GPTQ`:

```shell
mkdir Pallas-0.5-frankenmerge-GPTQ
huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir Pallas-0.5-frankenmerge-GPTQ
huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Pallas-0.5-frankenmerge-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pallas-0.5-frankenmerge-GPTQ --local-dir Pallas-0.5-frankenmerge-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Pallas-0.5-frankenmerge-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Pallas-0.5-frankenmerge-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Pallas-0.5-frankenmerge-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Pallas-0.5-frankenmerge-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Pallas-0.5-frankenmerge-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant"  # Define the system message used by the prompt template
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```

<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/Pallas-0.5-frankenmerge-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```

<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Mihai's Pallas 0.5 Frankenmerge

This is a frankenmerge of [Mihaiii/Pallas-0.5](https://huggingface.co/Mihaiii/Pallas-0.5), made using [mergekit](https://github.com/cg123/mergekit).

It works well with long system prompts. It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but rather for reasoning and text comprehension.

This model is trained on a private dataset.

# Prompt Format:

```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```

Merge config:

```yaml
slices:
  - sources:
    - model: "Mihaiii/Pallas-0.5"
      layer_range: [0, 60]
  - sources:
    - model: "Mihaiii/Pallas-0.5"
      layer_range: [58, 60]
  - sources:
    - model: "Mihaiii/Pallas-0.5"
      layer_range: [55, 56]
merge_method: passthrough
dtype: bfloat16
```
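For illustration only, a config like the one above can also be run programmatically. This is a hedged sketch assuming mergekit's Python entry points (`MergeConfiguration` and `run_merge`, as shown in the mergekit README at the time of writing); check the mergekit repo for the current interface, and note the output path here is hypothetical.

```python
# Hypothetical sketch: running the passthrough merge above with mergekit's Python API.
# Assumes the YAML config is saved as config.yml; verify against the mergekit README.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Pallas-0.5-frankenmerge",  # hypothetical output directory
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```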
jeiku/No_Robots_Alpaca_StableLM
jeiku
2024-01-06T23:13:44Z
0
1
null
[ "safetensors", "en", "dataset:AdamCodd/no_robots-alpaca", "license:other", "region:us" ]
null
2024-01-06T23:11:32Z
--- license: other datasets: - AdamCodd/no_robots-alpaca language: - en ---
TheBloke/Sensualize-Solar-10.7B-AWQ
TheBloke
2024-01-06T23:11:15Z
8
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "base_model:Sao10K/Sensualize-Solar-10.7B", "base_model:quantized:Sao10K/Sensualize-Solar-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-06T22:53:40Z
---
base_model: Sao10K/Sensualize-Solar-10.7B
inference: false
language:
- en
license: cc-by-nc-4.0
model_creator: Saofiq
model_name: Sensualize Solar 10.7B
model_type: solar
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Sensualize Solar 10.7B - AWQ
- Model creator: [Saofiq](https://huggingface.co/Sao10K)
- Original model: [Sensualize Solar 10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B)

<!-- description start -->
## Description

This repo contains AWQ model files for [Saofiq's Sensualize Solar 10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-GGUF) * [Saofiq's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Sensualize-Solar-10.7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 5.96 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Sensualize-Solar-10.7B-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Sensualize-Solar-10.7B-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/Sensualize-Solar-10.7B-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Sensualize-Solar-10.7B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Sensualize-Solar-10.7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Sensualize-Solar-10.7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Saofiq's Sensualize Solar 10.7B

A finetune of Base Solar. It took 12 hours or so on 2x RTX 6000 Adas; this is an 8-bit LoRA.

This is meant to be a verbose, smart ERP model. Experimental.

***

### Prompt Format: Alpaca

```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:
```
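To make the template concrete, here is a small illustrative helper for assembling this Alpaca-with-input format; the function name and example strings are made up, not part of the original card:

```python
# Minimal sketch of building the Alpaca prompt above; names are illustrative.
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    prompt = f"### Instruction:\n{instruction}\n\n"
    if context:  # the optional "### Input:" block carries extra context
        prompt += f"### Input:\n{context}\n\n"
    return prompt + "### Response:\n"

print(build_alpaca_prompt("Summarise the scene.", "It was a dark and stormy night."))
```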
ntc-ai/SDXL-LoRA-slider.extremely-cozy
ntc-ai
2024-01-06T23:08:59Z
26
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-06T23:08:56Z
--- language: - en thumbnail: "images/evaluate/extremely cozy.../extremely cozy_17_3.0.png" widget: - text: extremely cozy output: url: images/extremely cozy_17_3.0.png - text: extremely cozy output: url: images/extremely cozy_19_3.0.png - text: extremely cozy output: url: images/extremely cozy_20_3.0.png - text: extremely cozy output: url: images/extremely cozy_21_3.0.png - text: extremely cozy output: url: images/extremely cozy_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "extremely cozy" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - extremely cozy (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/extremely cozy_17_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_17_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_17_3.0.png" width=256 height=256 /> | | <img src="images/extremely cozy_19_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_19_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_19_3.0.png" width=256 height=256 /> | | <img src="images/extremely cozy_20_-3.0.png" width=256 height=256 /> | <img src="images/extremely cozy_20_0.0.png" width=256 height=256 /> | <img src="images/extremely cozy_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` extremely cozy ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.extremely-cozy', weight_name='extremely cozy.safetensors', adapter_name="extremely cozy") # Activate the LoRA pipe.set_adapters(["extremely cozy"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, extremely cozy" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 910+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
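Since the card presents the slider at strengths -3, 0 and 3, it can be useful to sweep the adapter weight programmatically. A minimal, hedged sketch, assuming the `pipe` object and LoRA from the diffusers example above are already set up; the output filenames are placeholders:

```python
# Sweep the slider strength; assumes `pipe` from the card's example, with the
# "extremely cozy" LoRA already loaded via load_lora_weights().
for weight in [-3.0, 0.0, 3.0]:
    pipe.set_adapters(["extremely cozy"], adapter_weights=[weight])
    image = pipe("medieval rich kingpin sitting in a tavern, extremely cozy",
                 negative_prompt="nsfw", width=512, height=512,
                 guidance_scale=2, num_inference_steps=10).images[0]
    image.save(f"cozy_strength_{weight}.png")  # placeholder output path
```

Negative weights pull the image away from the concept, which is what produces the "Strength: -3" column in the comparison table above.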
air217/distilhubert-finetuned-gtzan
air217
2024-01-06T22:57:36Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-04T19:52:49Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan model-index: - name: distilhubert-finetuned-gtzan results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
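The card itself doesn't include inference code. As a hedged sketch, a checkpoint like this should load with the standard `transformers` audio-classification pipeline; the audio file path below is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned genre classifier from the Hub
classifier = pipeline("audio-classification", model="air217/distilhubert-finetuned-gtzan")

# "example_track.wav" is a placeholder path to a local audio file
predictions = classifier("example_track.wav")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```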
TheBloke/zephyr-quiklang-3b-4K-GPTQ
TheBloke
2024-01-06T22:52:51Z
30
2
transformers
[ "transformers", "safetensors", "stablelm_epoch", "feature-extraction", "causal_lm", "text-generation", "conversational", "custom_code", "dataset:teknium/openhermes", "base_model:Walmart-the-bag/zephyr-quiklang-3b-4K", "base_model:quantized:Walmart-the-bag/zephyr-quiklang-3b-4K", "license:other", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-06T22:39:21Z
--- base_model: Walmart-the-bag/zephyr-quiklang-3b-4K datasets: - teknium/openhermes inference: false license: other model_creator: wbag model_name: Zephyr Quiklang 3B 4K model_type: stablelm_epoch pipeline_tag: text-generation prompt_template: '<|system|> {system_message}</s> <|user|> {prompt}</s> <|assistant|> ' quantized_by: TheBloke tags: - causal_lm --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Zephyr Quiklang 3B 4K - GPTQ - Model creator: [wbag](https://huggingface.co/Walmart-the-bag) - Original model: [Zephyr Quiklang 3B 4K](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K) <!-- description start --> # Description This repo contains GPTQ model files for [wbag's Zephyr Quiklang 3B 4K](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GGUF) * [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b-4K) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Zephyr ``` <|system|> {system_message}</s> <|user|> {prompt}</s> <|assistant|> ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.84 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.99 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.06 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.12 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. 
| | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.30 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. | | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 1.89 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/zephyr-quiklang-3b-4K-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/zephyr-quiklang-3b-4K-GPTQ:gptq-4bit-32g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `zephyr-quiklang-3b-4K-GPTQ`: ```shell mkdir zephyr-quiklang-3b-4K-GPTQ huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GPTQ --local-dir zephyr-quiklang-3b-4K-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir zephyr-quiklang-3b-4K-GPTQ huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir zephyr-quiklang-3b-4K-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir zephyr-quiklang-3b-4K-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/zephyr-quiklang-3b-4K-GPTQ --local-dir zephyr-quiklang-3b-4K-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. 
</details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/zephyr-quiklang-3b-4K-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/zephyr-quiklang-3b-4K-GPTQ`. - To download from a specific branch, enter for example `TheBloke/zephyr-quiklang-3b-4K-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `zephyr-quiklang-3b-4K-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/zephyr-quiklang-3b-4K-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''<|system|> {system_message}</s> <|user|> {prompt}</s> <|assistant|> ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. 
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/zephyr-quiklang-3b-4K-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=True,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: wbag's Zephyr Quiklang 3B 4K

# Description
This is the 4K version of https://huggingface.co/Walmart-the-bag/zephyr-quiklang-3b with 1000 more samples of openhermes.

# Original Model Description
This is a finetune of [StableLM-Zephyr-3B](https://huggingface.co/stabilityai/stablelm-zephyr-3b) with two datasets: toxic-dpo, and openhermes with 10,000 samples.

# Training Parameters
- 1x A6000-48GB
- batch_size: 6
- learning_rate: 5e-5

# Datasets:
- unalignment/toxic-dpo-v0.1
- teknium/openhermes

# Metrics/Basic Eval:
"predict_bleu-4": 31.594154999999997,
"predict_rouge-1": 44.092935,
"predict_rouge-2": 22.276081000000005,
"predict_rouge-l": 34.506909,
"predict_runtime": 121.7549,
"predict_samples_per_second": 0.821,
"predict_steps_per_second": 0.107
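The card doesn't say which harness produced these numbers. As a generic, hedged illustration of how BLEU/ROUGE figures like the above are usually computed, here is a sketch using the `evaluate` library with made-up predictions and references:

```python
import evaluate

# Toy data; a real evaluation would use the model's generations and gold answers.
predictions = ["The llama sat on the mat."]
references = [["A llama sat on the mat."]]

bleu = evaluate.load("bleu")    # corpus BLEU, 4-gram by default
rouge = evaluate.load("rouge")  # ROUGE-1/2/L

print(bleu.compute(predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=[r[0] for r in references]))
```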
TheBloke/bagel-8x7b-v0.2-GPTQ
TheBloke
2024-01-06T22:25:26Z
27
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "dataset:ai2_arc", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:boolq", "dataset:jondurbin/cinematika-v0.1", "dataset:drop", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:cais/mmlu", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:spider", "dataset:squad_v2", "dataset:migtissera/Synthia-v1.3", "dataset:datasets/winogrande", "dataset:nvidia/HelpSteer", "dataset:Intel/orca_dpo_pairs", "dataset:unalignment/toxic-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned", "dataset:LDJnr/Capybara", "dataset:JULIELab/EmoBank", "dataset:kingbri/PIPPA-shareGPT", "base_model:jondurbin/bagel-8x7b-v0.2", "base_model:quantized:jondurbin/bagel-8x7b-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-06T20:27:52Z
--- base_model: jondurbin/bagel-8x7b-v0.2 datasets: - ai2_arc - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - boolq - jondurbin/cinematika-v0.1 - drop - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - cais/mmlu - Muennighoff/natural-instructions - openbookqa - piqa - Vezora/Tested-22k-Python-Alpaca - cakiki/rosetta-code - Open-Orca/SlimOrca - spider - squad_v2 - migtissera/Synthia-v1.3 - datasets/winogrande - nvidia/HelpSteer - Intel/orca_dpo_pairs - unalignment/toxic-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - allenai/ultrafeedback_binarized_cleaned - Squish42/bluemoon-fandom-1-1-rp-cleaned - LDJnr/Capybara - JULIELab/EmoBank - kingbri/PIPPA-shareGPT inference: false license: apache-2.0 model_creator: Jon Durbin model_name: Bagel 8X7B v0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Bagel 8X7B v0.2 - GPTQ - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2) <!-- description start --> # Description This repo contains GPTQ model files for [Jon Durbin's Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-8x7b-v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! <!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. 
| | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.01 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. | | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/bagel-8x7b-v0.2-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/bagel-8x7b-v0.2-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `bagel-8x7b-v0.2-GPTQ`: ```shell mkdir bagel-8x7b-v0.2-GPTQ huggingface-cli download TheBloke/bagel-8x7b-v0.2-GPTQ --local-dir bagel-8x7b-v0.2-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir bagel-8x7b-v0.2-GPTQ huggingface-cli download TheBloke/bagel-8x7b-v0.2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir bagel-8x7b-v0.2-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. 
The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir bagel-8x7b-v0.2-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/bagel-8x7b-v0.2-GPTQ --local-dir bagel-8x7b-v0.2-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/bagel-8x7b-v0.2-GPTQ`. - To download from a specific branch, enter for example `TheBloke/bagel-8x7b-v0.2-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `bagel-8x7b-v0.2-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/bagel-8x7b-v0.2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/bagel-8x7b-v0.2-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. 
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Jon Durbin's Bagel 8X7B v0.2

# A bagel, with everything (except DPO)

![bagel](bagel.png)

## Overview

An experimental fine-tune of [mixtral-8x7b-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [bagel](https://github.com/jondurbin/bagel).

This is the model after the SFT phase, before DPO has been applied.

Hardware kindly provided by [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon)

### Data sources

*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*

- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
  - Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1) - Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps) - Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele) - Multi-lingual reading comprehension dataset.
- [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT.
- [boolq](https://huggingface.co/datasets/boolq) - Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [capybara](https://huggingface.co/datasets/LDJnr/Capybara) - Multi-turn dataset used to create the capybara models.
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text) - RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop) - More reading comprehension.
- [emobank](https://github.com/JULIELab/EmoBank) - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text) - Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO) - Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) - Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu) - Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions) - Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa) - Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT) - Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa) - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca) - Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code) - Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider) - SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2) - Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3) - GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande) - Fill in the blank style prompts.

Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
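To make the decontamination step concrete, here is a minimal illustration of a faiss-based approximate-nearest-neighbor filter. The function name, embedding inputs, and similarity threshold are illustrative assumptions, not the exact pipeline used for bagel:

```python
import faiss  # pip install faiss-cpu
import numpy as np

def decontaminate(train_emb: np.ndarray, eval_emb: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Return a boolean mask keeping training rows whose nearest eval-set
    neighbor stays below a cosine-similarity threshold (threshold is illustrative)."""
    train_emb = np.ascontiguousarray(train_emb, dtype=np.float32)
    eval_emb = np.ascontiguousarray(eval_emb, dtype=np.float32)
    faiss.normalize_L2(train_emb)  # cosine similarity == inner product on unit vectors
    faiss.normalize_L2(eval_emb)
    index = faiss.IndexFlatIP(eval_emb.shape[1])
    index.add(eval_emb)
    sims, _ = index.search(train_emb, 1)  # top-1 similarity per training row
    return sims[:, 0] < threshold
```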
## How to easily download and use this model

[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.

1) For this model rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine
2) After you start your rental you will receive an email with instructions on how to log in to the VM
3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
4) Then `cd Desktop/text-generation-inference/`
5) Run `volume=$PWD/data`
6) Run `model=jondurbin/bagel-8x7b-v0.2`
7) `sudo docker run --gpus '"device=0,1,2,3"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
8) The model will take some time to load...
9) Once loaded the model will be available on port 8080

Sample command within the VM
```
curl 0.0.0.0:8080/generate \
    -X POST \
    -d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
    -H 'Content-Type: application/json'
```

You can also access the model from outside the VM
```
curl IP_ADDRESS_PROVIDED_BY_MASSED_COMPUTE_VM:8080/generate \
    -X POST \
    -d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
    -H 'Content-Type: application/json'
```

For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)

## Prompt formatting

In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).

I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.

This means each epoch of our fine-tune is effectively 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.

### Alpaca (sort of)

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt, if provided}
{instruction}

### Response:
```

The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.

### Vicuna

```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```

### ChatML (sort of)

I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```

I just changed it to:
```text
{bos}{role}
{text}
{eos}
```

If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>`, and they will be applied when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.

### Llama-2 chat

```
[INST] <<SYS>>
{system}
<</SYS>>

{instruction} [/INST]
```

### Default via chat template

The model's `tokenizer_config.json` includes the default chat template (llama-2), so you can simply use the `apply_chat_template` method to build the full prompt.

```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/bagel-8x7b-v0.2')
chat = [
  {"role": "system", "content": "You are Bob, a friendly AI assistant."},
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```

### Contribute

If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.

To help me with the fine-tuning costs (which are extremely expensive for these large combined datasets):

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Guide for certain tasks

#### RA(G)/contextual question answering

The model was trained to ignore what it thinks it knows, and uses the context to answer the questions, when using the format below. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set

__Use a very low temperature!__

Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:

```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re

import requests


def inject_context(input_text, **context):
    # Swap :evidenceN: references for previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # Placeholder: search via DuckDuckGo using search_string and return the text content.
    raise NotImplementedError


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"https?://[^\s]+", input_text, re.I)))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # Placeholder: call the model with prompt and return its output.
    raise NotImplementedError


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+?)\s*(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets from the tool argument before dispatching.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3)[1:-1], **context)
```

### Fine-tuning information

You can find charts and the full configuration used to fine-tune this model on [weights and biases](https://wandb.ai/jondurbin/bagel-8x7b-v0.2/runs/agxjjdso?workspace=user-jondurbin)

The model was fine-tuned on an 8x A6000 instance for 4 days, 15 hours, 6 minutes and 42 seconds.

### Licence and usage restrictions

The base model is mixtral-8x7b-v0.1, which is licensed as apache-2.0 - no issues there.

The fine-tuning data, however, includes several datasets that have data generated at least in part by OpenAI's gpt-4.

I am not a lawyer, so I can't help determine if this is actually commercially viable, but some questions that often come up are:

- Does the OpenAI ToS apply only to the user who created the dataset initially, and not subsequent models?
- If the dataset was released under a permissive license, but actually includes OpenAI generated data, does that ToS supersede the license? - Does the dataset fall completely under fair use anyways, since the model isn't really capable of reproducing the entire training set verbatim? Use your best judgement and seek legal advice if you are concerned about the terms. In any case, by using this model, you agree to completely indemnify me.
Felladrin/onnx-Pythia-31M-Chat-v1
Felladrin
2024-01-06T22:24:11Z
9
0
transformers.js
[ "transformers.js", "onnx", "gpt_neox", "text-generation", "conversational", "en", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:databricks/databricks-dolly-15k", "dataset:THUDM/webglm-qa", "dataset:starfishmedical/webGPT_x_dolly", "dataset:Amod/mental_health_counseling_conversations", "dataset:sablo/oasst2_curated", "dataset:cognitivecomputations/wizard_vicuna_70k_unfiltered", "dataset:mlabonne/chatml_dpo_pairs", "base_model:Felladrin/Pythia-31M-Chat-v1", "base_model:quantized:Felladrin/Pythia-31M-Chat-v1", "license:apache-2.0", "region:us" ]
text-generation
2024-01-06T17:11:24Z
--- license: apache-2.0 library_name: "transformers.js" base_model: Felladrin/Pythia-31M-Chat-v1 language: - en datasets: - totally-not-an-llm/EverythingLM-data-V3 - databricks/databricks-dolly-15k - THUDM/webglm-qa - starfishmedical/webGPT_x_dolly - Amod/mental_health_counseling_conversations - sablo/oasst2_curated - cognitivecomputations/wizard_vicuna_70k_unfiltered - mlabonne/chatml_dpo_pairs --- INT8 ONNX version of [Felladrin/Pythia-31M-Chat-v1](https://huggingface.co/Felladrin/Pythia-31M-Chat-v1) to use with [Transformers.js](https://huggingface.co/docs/transformers.js).
s3nh/s3nh-Medicine-Noromaid-13b-GGUF
s3nh
2024-01-06T22:21:11Z
69
1
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T18:26:55Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)

<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF Format model files for [This project](https://huggingface.co/s3nh/Medicine-Noromaid-13b).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure    | Q2_K   | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0   | Q4_1   | Q4_K_S | Q4_K_M | Q5_0   | Q5_1   | Q5_K_S | Q5_K_M | Q6_K   | Q8_0   | F16    |
|-------|------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 7B    | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B   | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### Inference

TODO (see the sketch below)

# Original model card
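Since the inference section above is still a TODO, here is a minimal stopgap sketch using llama-cpp-python. It assumes you have downloaded one of the GGUF files from this repo locally; the filename and the Alpaca-style prompt are assumptions, not confirmed by the card:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Filename is hypothetical; substitute whichever quantization you downloaded.
llm = Llama(model_path="./medicine-noromaid-13b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "### Instruction:\nExplain what hypertension is.\n\n### Response:\n",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```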
s3nh/s3nh-Law-Noromaid-13b-GGUF
s3nh
2024-01-06T22:21:02Z
11
2
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T18:26:43Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)

<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF Format model files for [This project](https://huggingface.co/s3nh/Law-Noromaid-13b).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure    | Q2_K   | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0   | Q4_1   | Q4_K_S | Q4_K_M | Q5_0   | Q5_1   | Q5_K_S | Q5_K_M | Q6_K   | Q8_0   | F16    |
|-------|------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 7B    | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B   | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### Inference

TODO

# Original model card
alirzb/S5_M1_fold5_BEiT_42621849
alirzb
2024-01-06T22:15:43Z
147
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T20:50:24Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S5_M1_fold5_BEiT_42621849 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S5_M1_fold5_BEiT_42621849 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0023 | 1.0 | 368 | 0.0052 | 0.9984 | | 0.0201 | 2.0 | 737 | 0.0208 | 0.9952 | | 0.0 | 3.0 | 1105 | 0.0257 | 0.9936 | | 0.0007 | 4.0 | 1474 | 0.0005 | 1.0 | | 0.0001 | 4.99 | 1840 | 0.0001 | 1.0 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
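The card above does not include a usage snippet; a minimal sketch with the Transformers image-classification pipeline follows. The image path is a placeholder, and the label set depends on the undocumented training dataset:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="alirzb/S5_M1_fold5_BEiT_42621849")

# "example.jpg" is a placeholder; pass any local image path or URL.
predictions = classifier("example.jpg")
print(predictions)
```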
Boulou2107/comic-name-classification
Boulou2107
2024-01-06T22:08:46Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "bert", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:adapter:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "region:us" ]
null
2024-01-06T21:00:19Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer metrics: - accuracy base_model: bert-base-multilingual-cased model-index: - name: comic-name-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # comic-name-classification This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0448 - Accuracy: 0.9937 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000125 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 25 | 0.0317 | 0.9933 | | No log | 2.0 | 50 | 0.0342 | 0.9933 | | No log | 3.0 | 75 | 0.0339 | 0.9933 | | No log | 4.0 | 100 | 0.0361 | 0.9941 | | No log | 5.0 | 125 | 0.0367 | 0.9945 | | No log | 6.0 | 150 | 0.0372 | 0.9941 | | No log | 7.0 | 175 | 0.0388 | 0.9945 | | No log | 8.0 | 200 | 0.0365 | 0.9941 | | No log | 9.0 | 225 | 0.0359 | 0.9941 | | No log | 10.0 | 250 | 0.0385 | 0.9941 | | No log | 11.0 | 275 | 0.0380 | 0.9941 | | No log | 12.0 | 300 | 0.0394 | 0.9937 | | No log | 13.0 | 325 | 0.0389 | 0.9941 | | No log | 14.0 | 350 | 0.0398 | 0.9941 | | No log | 15.0 | 375 | 0.0398 | 0.9937 | | No log | 16.0 | 400 | 0.0399 | 0.9941 | | No log | 17.0 | 425 | 0.0419 | 0.9941 | | No log | 18.0 | 450 | 0.0409 | 0.9941 | | No log | 19.0 | 475 | 0.0415 | 0.9937 | | 0.0037 | 20.0 | 500 | 0.0418 | 0.9941 | | 0.0037 | 21.0 | 525 | 0.0430 | 0.9941 | | 0.0037 | 22.0 | 550 | 0.0419 | 0.9941 | | 0.0037 | 23.0 | 575 | 0.0434 | 0.9941 | | 0.0037 | 24.0 | 600 | 0.0443 | 0.9941 | | 0.0037 | 25.0 | 625 | 0.0447 | 0.9941 | | 0.0037 | 26.0 | 650 | 0.0444 | 0.9937 | | 0.0037 | 27.0 | 675 | 0.0438 | 0.9937 | | 0.0037 | 28.0 | 700 | 0.0431 | 0.9941 | | 0.0037 | 29.0 | 725 | 0.0426 | 0.9941 | | 0.0037 | 30.0 | 750 | 0.0434 | 0.9941 | | 0.0037 | 31.0 | 775 | 0.0442 | 0.9941 | | 0.0037 | 32.0 | 800 | 0.0423 | 0.9941 | | 0.0037 | 33.0 | 825 | 0.0423 | 0.9941 | | 0.0037 | 34.0 | 850 | 0.0419 | 0.9941 | | 0.0037 | 35.0 | 875 | 0.0422 | 0.9941 | | 0.0037 | 36.0 | 900 | 0.0433 | 0.9941 | | 0.0037 | 37.0 | 925 | 0.0434 | 0.9941 | | 0.0037 | 38.0 | 950 | 0.0443 | 0.9941 | | 0.0037 | 39.0 | 975 | 0.0449 | 0.9937 | | 0.002 | 40.0 | 1000 | 0.0452 | 0.9937 | | 0.002 | 41.0 | 1025 | 0.0459 | 0.9941 | | 0.002 | 42.0 | 1050 | 0.0463 | 0.9941 | | 0.002 | 43.0 | 1075 | 0.0449 | 0.9937 | | 0.002 | 44.0 | 1100 | 0.0443 | 0.9941 | | 0.002 | 45.0 | 1125 | 0.0442 | 0.9941 | | 0.002 | 46.0 | 1150 | 0.0445 | 0.9941 | | 0.002 | 47.0 | 1175 | 0.0446 | 0.9941 | | 0.002 | 48.0 | 1200 | 0.0447 | 0.9937 | | 0.002 | 49.0 | 1225 | 0.0448 | 0.9937 | | 0.002 | 50.0 | 1250 | 0.0448 | 0.9937 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
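Since this is a PEFT adapter on top of `bert-base-multilingual-cased`, loading it for inference might look like the sketch below. `AutoPeftModelForSequenceClassification` is used on the assumption that the adapter was trained for sequence classification; the example input is invented, and the label names are not documented:

```python
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

model = AutoPeftModelForSequenceClassification.from_pretrained("Boulou2107/comic-name-classification")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("Astérix le Gaulois", return_tensors="pt")  # invented example title
predicted_class = model(**inputs).logits.argmax(dim=-1)
print(predicted_class)
```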
kanishka/smolm-autoreg-bpe-counterfactual-babylm-adj_num_freq_balanced-seed_1024-1e-4
kanishka
2024-01-06T21:42:43Z
7
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T23:09:46Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-autoreg-bpe-counterfactual-babylm-adj_num_freq_balanced-seed_1024-1e-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-counterfactual-babylm-adj_num_freq_balanced-seed_1024-1e-4 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4789 - Accuracy: 0.4021 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 4.0508 | 1.0 | 18629 | 4.2865 | 0.3074 | | 3.5631 | 2.0 | 37258 | 3.7670 | 0.3600 | | 3.387 | 3.0 | 55887 | 3.6088 | 0.3779 | | 3.2838 | 4.0 | 74516 | 3.5356 | 0.3856 | | 3.2141 | 5.0 | 93145 | 3.4872 | 0.3914 | | 3.1605 | 6.0 | 111774 | 3.4762 | 0.3930 | | 3.1134 | 7.0 | 130403 | 3.4605 | 0.3958 | | 3.0807 | 8.0 | 149032 | 3.4343 | 0.3978 | | 3.0435 | 9.0 | 167661 | 3.4517 | 0.3985 | | 3.017 | 10.0 | 186290 | 3.4361 | 0.4001 | | 2.9929 | 11.0 | 204919 | 3.4535 | 0.4003 | | 2.9723 | 12.0 | 223548 | 3.4509 | 0.4010 | | 2.9477 | 13.0 | 242177 | 3.4610 | 0.4011 | | 2.9248 | 14.0 | 260806 | 3.4676 | 0.4013 | | 2.901 | 15.0 | 279435 | 3.4595 | 0.4016 | | 2.8845 | 16.0 | 298064 | 3.4641 | 0.4017 | | 2.8637 | 17.0 | 316693 | 3.4624 | 0.4018 | | 2.851 | 18.0 | 335322 | 3.4731 | 0.4018 | | 2.8298 | 19.0 | 353951 | 3.4763 | 0.4020 | | 2.817 | 20.0 | 372580 | 3.4789 | 0.4021 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.14.1
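For completeness, a minimal generation sketch with the Transformers pipeline; the prompt is an arbitrary example, and since this is a small model trained from scratch on a BabyLM-style corpus, outputs will be limited:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="kanishka/smolm-autoreg-bpe-counterfactual-babylm-adj_num_freq_balanced-seed_1024-1e-4",
)

print(generator("The little dog", max_new_tokens=20)[0]["generated_text"])
```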
yy0514/llama2-7b-qlora-lek-train-4-epochs
yy0514
2024-01-06T21:36:25Z
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-06T20:46:04Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: llama2-7b-qlora-lek-train-4-epochs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-qlora-lek-train-4-epochs This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
kanishka/smolm-autoreg-bpe-counterfactual-babylm-indef-naan-rerun-seed_1024-1e-3
kanishka
2024-01-06T21:22:44Z
6
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T22:54:03Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: smolm-autoreg-bpe-counterfactual-babylm-indef-naan-rerun-seed_1024-1e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smolm-autoreg-bpe-counterfactual-babylm-indef-naan-rerun-seed_1024-1e-3 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4016 - Accuracy: 0.4111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 32000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 3.5989 | 1.0 | 18595 | 3.7872 | 0.3601 | | 3.3881 | 2.0 | 37190 | 3.5672 | 0.3809 | | 3.2548 | 3.0 | 55785 | 3.4935 | 0.3930 | | 3.1777 | 4.0 | 74380 | 3.4229 | 0.3988 | | 3.12 | 5.0 | 92975 | 3.4040 | 0.4021 | | 3.0819 | 6.0 | 111570 | 3.3716 | 0.4050 | | 3.0441 | 7.0 | 130165 | 3.3507 | 0.4065 | | 3.014 | 8.0 | 148760 | 3.3530 | 0.4076 | | 2.9831 | 9.0 | 167355 | 3.3354 | 0.4096 | | 2.9561 | 10.0 | 185950 | 3.3654 | 0.4080 | | 2.9377 | 11.0 | 204545 | 3.3576 | 0.4101 | | 2.9146 | 12.0 | 223140 | 3.3649 | 0.4106 | | 2.8927 | 13.0 | 241735 | 3.3646 | 0.4105 | | 2.8718 | 14.0 | 260330 | 3.3591 | 0.4108 | | 2.8521 | 15.0 | 278925 | 3.3636 | 0.4114 | | 2.8348 | 16.0 | 297520 | 3.3807 | 0.4111 | | 2.8131 | 17.0 | 316115 | 3.3772 | 0.4109 | | 2.7921 | 18.0 | 334710 | 3.3874 | 0.4110 | | 2.7743 | 19.0 | 353305 | 3.3928 | 0.4112 | | 2.7615 | 20.0 | 371900 | 3.4016 | 0.4111 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.14.1
gagan3012/MetaModel_arabic
gagan3012
2024-01-06T21:20:30Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "MetaModel_arabic", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-05T23:26:22Z
--- license: apache-2.0 tags: - MetaModel_arabic --- # MetaModel_arabic This model is a hybrid of the following models and is trained using the following configuration: * [FreedomIntelligence/AceGPT-7B](https://huggingface.co/FreedomIntelligence/AceGPT-7B) * [FreedomIntelligence/AceGPT-7B](https://huggingface.co/FreedomIntelligence/AceGPT-7B)
thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned
thierryteisseire
2024-01-06T21:19:17Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
2024-01-06T17:00:59Z
--- library_name: peft base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
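The card above leaves usage blank; here is a minimal, untested sketch for loading this PEFT adapter on top of its TinyLlama base. The chat prompt relies on the base model's chat template, which is an assumption about how the adapter was trained:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "thierryteisseire/TinyLlama-1.1B-Chat-v1.0-fine-tuned",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```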
bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2
bartowski
2024-01-06T21:00:48Z
0
1
null
[ "mistral", "instruct", "finetune", "chatml", "gpt4", "synthetic data", "distillation", "dpo", "rlhf", "laser", "text-generation", "en", "dataset:mlabonne/chatml_dpo_pairs", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
text-generation
2024-01-06T20:41:25Z
---
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- laser
license: apache-2.0
language:
- en
datasets:
- mlabonne/chatml_dpo_pairs
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of NeuralHermes-2.5-Mistral-7B-laser

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Conversion was done using the default calibration dataset.

Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.

Original model: https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B-laser

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` (only useful if you only care about measurement.json) branch to a folder called `NeuralHermes-2.5-Mistral-7B-laser-exl2`:

```shell
mkdir NeuralHermes-2.5-Mistral-7B-laser-exl2
huggingface-cli download bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir NeuralHermes-2.5-Mistral-7B-laser-exl2
huggingface-cli download bartowski/NeuralHermes-2.5-Mistral-7B-laser-exl2 --revision 4_0 --local-dir NeuralHermes-2.5-Mistral-7B-laser-exl2 --local-dir-use-symlinks False
```
TheBloke/bagel-8x7b-v0.2-AWQ
TheBloke
2024-01-06T20:58:52Z
10
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "dataset:ai2_arc", "dataset:jondurbin/airoboros-3.2", "dataset:codeparrot/apps", "dataset:facebook/belebele", "dataset:boolq", "dataset:jondurbin/cinematika-v0.1", "dataset:drop", "dataset:lmsys/lmsys-chat-1m", "dataset:TIGER-Lab/MathInstruct", "dataset:cais/mmlu", "dataset:Muennighoff/natural-instructions", "dataset:openbookqa", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:cakiki/rosetta-code", "dataset:Open-Orca/SlimOrca", "dataset:spider", "dataset:squad_v2", "dataset:migtissera/Synthia-v1.3", "dataset:datasets/winogrande", "dataset:nvidia/HelpSteer", "dataset:Intel/orca_dpo_pairs", "dataset:unalignment/toxic-dpo-v0.1", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:allenai/ultrafeedback_binarized_cleaned", "dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned", "dataset:LDJnr/Capybara", "dataset:JULIELab/EmoBank", "dataset:kingbri/PIPPA-shareGPT", "base_model:jondurbin/bagel-8x7b-v0.2", "base_model:quantized:jondurbin/bagel-8x7b-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-01-06T20:27:52Z
--- base_model: jondurbin/bagel-8x7b-v0.2 datasets: - ai2_arc - jondurbin/airoboros-3.2 - codeparrot/apps - facebook/belebele - boolq - jondurbin/cinematika-v0.1 - drop - lmsys/lmsys-chat-1m - TIGER-Lab/MathInstruct - cais/mmlu - Muennighoff/natural-instructions - openbookqa - piqa - Vezora/Tested-22k-Python-Alpaca - cakiki/rosetta-code - Open-Orca/SlimOrca - spider - squad_v2 - migtissera/Synthia-v1.3 - datasets/winogrande - nvidia/HelpSteer - Intel/orca_dpo_pairs - unalignment/toxic-dpo-v0.1 - jondurbin/truthy-dpo-v0.1 - allenai/ultrafeedback_binarized_cleaned - Squish42/bluemoon-fandom-1-1-rp-cleaned - LDJnr/Capybara - JULIELab/EmoBank - kingbri/PIPPA-shareGPT inference: false license: apache-2.0 model_creator: Jon Durbin model_name: Bagel 8X7B v0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Bagel 8X7B v0.2 - AWQ - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2) <!-- description start --> ## Description This repo contains AWQ model files for [Jon Durbin's Bagel 8X7B v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2). **MIXTRAL AWQ** This is a Mixtral AWQ model. For AutoAWQ inference, please install AutoAWQ 0.1.8 or later. Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git` vLLM: version 0.2.6 is confirmed to support Mixtral AWQs. TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!) ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. 
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above): - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/bagel-8x7b-v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/bagel-8x7b-v0.2-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/bagel-8x7b-v0.2-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `bagel-8x7b-v0.2-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. 
- When using vLLM as a server, pass the `--quantization awq` parameter. For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/bagel-8x7b-v0.2-AWQ --quantization awq --dtype auto
```

- When using vLLM from Python code, again set `quantization=awq`. For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Plain (non-f) string, so {prompt} survives as a placeholder for .format() below.
prompt_template='''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/bagel-8x7b-v0.2-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/bagel-8x7b-v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/bagel-8x7b-v0.2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Jon Durbin's Bagel 8X7B v0.2 # A bagel, with everything (except DPO) ![bagel](bagel.png) ## Overview An experimental fine-tune of [mixtral-8x7b-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [bagel](https://github.com/jondurbin/bagel) This is the model after the SFT phase, before DPO has been applied. Hardware kindly provided by [Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) ### Data sources *Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check* - [ai2_arc](https://huggingface.co/datasets/ai2_arc) - Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent. - [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1) - Variety of categories of synthetic instructions generated by gpt-4. - [apps](https://huggingface.co/datasets/codeparrot/apps) - Python coding dataset with 10k problems. - [belebele](https://huggingface.co/datasets/facebook/belebele) - Multi-lingual reading comprehension dataset. - [bluemoon](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - Roleplay data scraped from Bluemoon, then cleaned and formatted as ShareGPT. - [boolq](https://huggingface.co/datasets/boolq) - Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?) - [capybara](https://huggingface.co/datasets/LDJnr/Capybara) - Multi-turn dataset used to create the capybara models. - [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text) - RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be. - [drop](https://huggingface.co/datasets/drop) - More reading comprehension. 
- [emobank](https://github.com/JULIELab/EmoBank) - Emotion annotations using the Valence-Arousal-Dominance scheme.
- [gutenberg](https://www.gutenberg.org/) (plain text) - Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO) - Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) - Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu) - Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions) - Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa) - Question answering dataset.
- [pippa](https://huggingface.co/datasets/kingbri/PIPPA-shareGPT) - Deduped version of [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) in ShareGPT format.
- [piqa](https://huggingface.co/datasets/piqa) - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca) - Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code) - Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider) - SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2) - Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3) - GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande) - Fill in the blank style prompts.

Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).

## How to easily download and use this model

[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.

1) For this model, rent the [Jon Durbin 4xA6000](https://shop.massedcompute.com/products/jon-durbin-4x-a6000?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) Virtual Machine
2) After you start your rental, you will receive an email with instructions on how to log in to the VM
3) Once inside the VM, open the terminal and run `conda activate text-generation-inference`
4) Then `cd Desktop/text-generation-inference/`
5) Run `volume=$PWD/data`
6) Run `model=jondurbin/bagel-8x7b-v0.2`
7) Run `sudo docker run --gpus '"device=0,1,2,3"' --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
8) The model will take some time to load...
9) Once loaded the model will be available on port 8080

Sample command within the VM
```
curl 0.0.0.0:8080/generate \
    -X POST \
    -d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
    -H 'Content-Type: application/json'
```

You can also access the model from outside the VM
```
curl IP_ADDRESS_PROVIDED_BY_MASSED_COMPUTE_VM:8080/generate \
    -X POST \
    -d '{"inputs":"<|system|>You are a friendly chatbot.\n<|user|>What type of model are you?\n<|assistant|>","parameters":{"do_sample": true, "max_new_tokens": 100, "repetition_penalty": 1.15, "temperature": 0.7, "top_k": 20, "top_p": 0.9, "best_of": 1}}' \
    -H 'Content-Type: application/json'
```

For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)

## Prompt formatting

In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).

I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.

This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.

### Alpaca (sort of)

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt, if provided}
{instruction}

### Response:
```

The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.

### Vicuna

```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT: 
```

### ChatML (sort of)

I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).

So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```

I just changed it to:
```text
{bos}{role}
{text}
{eos}
```

If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.

### Llama-2 chat

```
[INST] <<SYS>>
{system}
<</SYS>>

{instruction} [/INST]
```

### Default via chat template

The model's `tokenizer_config.json` includes the default chat template (llama-2), so you can simply use the `apply_chat_template` method to build the full prompt.

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/bagel-8x7b-v0.2')
chat = [
    {"role": "system", "content": "You are Bob, a friendly AI assistant."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```

### Contribute

If you're interested in new functionality/datasets, take a look at [bagel repo](https://github.com/jondurbin/bagel) and either make a PR or open an issue with details.

To help me with the fine-tuning costs (which are extremely expensive for these large combined datasets):

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Guide for certain tasks

#### RA(G)/contextual question answering

The model was trained to ignore what it thinks it knows, and uses the context to answer the questions, when using the format below. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.

- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

__Use a very low temperature!__

Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions.
This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Replace any :evidenceN: references with previously computed values
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string, return the text content ...

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return the output ...

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets from the argument before dispatching
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3).strip("[]"), **context)
```

### Fine-tuning information

You can find charts and the full configuration used to fine-tune this model on [weights and biases](https://wandb.ai/jondurbin/bagel-8x7b-v0.2/runs/agxjjdso?workspace=user-jondurbin)

The model was fine-tuned on an 8x A6000 instance, for 4 days, 15 hours, 6 minutes and 42 seconds.

### Licence and usage restrictions

The base model is mixtral-8x7b-v0.1, which is licensed as apache-2.0 - no issues there.

The fine-tuning data, however, includes several datasets that have data generated at least in part by OpenAI's gpt-4.

I am not a lawyer, so I can't help determine if this is actually commercially viable, but some questions that often come up are:

- Does the OpenAI ToS apply only to the user who created the dataset initially, and not subsequent models?
- If the dataset was released under a permissive license, but actually includes OpenAI generated data, does that ToS supersede the license?
- Does the dataset fall completely under fair use anyways, since the model isn't really capable of reproducing the entire training set verbatim?

Use your best judgement and seek legal advice if you are concerned about the terms. In any case, by using this model, you agree to completely indemnify me.
ostapeno/newt_adaNeo1B_sciq_Multiple_Choice_sbs0.5_svdemb_sgd_full_ft_coarsegrained
ostapeno
2024-01-06T20:55:22Z
0
0
null
[ "region:us" ]
null
2024-01-06T16:07:39Z
Number of experts present in the library: 3 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | sciq_Multiple_Choice_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | | sciq_Multiple_Choice_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora | Last updated on: 2024-01-06 20:55:22+00:00
alirzb/S2_M1_R3_BEiT_42621830
alirzb
2024-01-06T20:49:17Z
4
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T19:37:38Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S2_M1_R3_BEiT_42621830 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S2_M1_R3_BEiT_42621830 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0128 - Accuracy: 0.9981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0088 | 1.0 | 307 | 0.0336 | 0.9942 | | 0.011 | 2.0 | 614 | 0.0439 | 0.9932 | | 0.0009 | 3.0 | 921 | 0.0163 | 0.9961 | | 0.003 | 4.0 | 1229 | 0.0130 | 0.9971 | | 0.0001 | 5.0 | 1535 | 0.0128 | 0.9981 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
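The card above omits inference code. As a minimal sketch (assuming this checkpoint loads through the standard `transformers` image-classification pipeline; `"example.jpg"` is a placeholder input path):

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned BEiT classifier.
classifier = pipeline("image-classification", model="alirzb/S2_M1_R3_BEiT_42621830")
print(classifier("example.jpg"))
```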
Sararodriguezabou/arbol
Sararodriguezabou
2024-01-06T20:37:46Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-01-06T20:37:41Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
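Since the card lacks usage details, here is a minimal loading sketch, assuming the repo contains a fastai Learner exported via `push_to_hub_fastai` (the input path below is a placeholder):

```python
from huggingface_hub import from_pretrained_fastai

# Hypothetical usage: downloads and reconstructs the exported fastai Learner.
learn = from_pretrained_fastai("Sararodriguezabou/arbol")
prediction = learn.predict("example.jpg")  # placeholder input
print(prediction)
```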
Puranjay14/rl_course_vizdoom_health_gathering_supreme
Puranjay14
2024-01-06T20:30:34Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-06T20:15:02Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.85 +/- 4.77 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Puranjay14/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
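Note that the module path in the commands above reflects the Colab environment this model was trained in. On a standard Sample-Factory installation, the equivalent enjoy command would typically go through the VizDoom example entry point (an assumption based on the Sample-Factory examples, not taken from this card):

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```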
SuYee189/wiki_bloom
SuYee189
2024-01-06T20:17:23Z
16
0
transformers
[ "transformers", "tensorboard", "safetensors", "bloom", "text-generation", "generated_from_trainer", "base_model:bigscience/bloom-560m", "base_model:finetune:bigscience/bloom-560m", "license:bigscience-bloom-rail-1.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T20:13:32Z
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
model-index:
- name: output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# output

This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4889

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
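As a minimal inference sketch (assuming the checkpoint loads through the standard `transformers` text-generation pipeline; the prompt is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned checkpoint.
generator = pipeline("text-generation", model="SuYee189/wiki_bloom")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```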
alirzb/S2_M1_R2_BEiT_42621227
alirzb
2024-01-06T20:14:45Z
147
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T18:57:48Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S2_M1_R2_BEiT_42621227 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S2_M1_R2_BEiT_42621227 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0197 - Accuracy: 0.9950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0451 | 1.0 | 237 | 0.0198 | 0.9912 | | 0.0182 | 2.0 | 474 | 0.0110 | 0.9950 | | 0.0048 | 3.0 | 711 | 0.0192 | 0.9950 | | 0.0046 | 4.0 | 948 | 0.0259 | 0.9950 | | 0.0001 | 5.0 | 1185 | 0.0197 | 0.9950 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
adasdimchom/blip-image-captioning-large
adasdimchom
2024-01-06T20:14:04Z
14
0
transformers
[ "transformers", "pytorch", "tf", "safetensors", "blip", "image-text-to-text", "image-captioning", "image-to-text", "arxiv:2201.12086", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
image-to-text
2024-01-06T00:30:38Z
---
pipeline_tag: image-to-text
tags:
- image-captioning
languages:
- en
license: bsd-3-clause
---

# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|

## TL;DR

Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.*

## Usage

You can use this model for conditional and unconditional image captioning

### Using the Pytorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
ac0hik/Sentiment_Analysis_French
ac0hik
2024-01-06T20:12:33Z
141
1
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "text-classification", "generated_from_trainer", "dataset:tweet_sentiment_multilingual", "base_model:almanach/camembert-base", "base_model:finetune:almanach/camembert-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-15T17:21:45Z
---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
datasets:
- tweet_sentiment_multilingual
metrics:
- accuracy
model-index:
- name: camembert_model
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: tweet_sentiment_multilingual
      type: tweet_sentiment_multilingual
      config: french
      split: validation
      args: french
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7654320987654321
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# camembert_model

This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the French portion of the tweet_sentiment_multilingual dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7877
- Accuracy: 0.7654

## Model description

A sentiment classifier for the French language: it classifies French text as positive, negative, or neutral.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 115  | 0.8510          | 0.6265   |
| No log        | 2.0   | 230  | 0.7627          | 0.7130   |
| No log        | 3.0   | 345  | 0.6966          | 0.7160   |
| No log        | 4.0   | 460  | 0.6862          | 0.7438   |
| 0.7126        | 5.0   | 575  | 0.6637          | 0.75     |
| 0.7126        | 6.0   | 690  | 0.7121          | 0.7654   |
| 0.7126        | 7.0   | 805  | 0.7641          | 0.7438   |
| 0.7126        | 8.0   | 920  | 0.7662          | 0.7654   |
| 0.2932        | 9.0   | 1035 | 0.7765          | 0.7747   |
| 0.2932        | 10.0  | 1150 | 0.7877          | 0.7654   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
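As a minimal usage sketch (assuming the standard `transformers` text-classification pipeline; the example sentence is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage: classifies French text as positive, negative, or neutral.
classifier = pipeline("text-classification", model="ac0hik/Sentiment_Analysis_French")
print(classifier("Ce film était vraiment excellent !"))
```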
ntc-ai/SDXL-LoRA-slider.extremely-extremely-aesthetic
ntc-ai
2024-01-06T20:08:47Z
22
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-06T20:08:41Z
--- language: - en thumbnail: "images/evaluate/extremely extremely aesthetic.../extremely extremely aesthetic_17_3.0.png" widget: - text: extremely extremely aesthetic output: url: images/extremely extremely aesthetic_17_3.0.png - text: extremely extremely aesthetic output: url: images/extremely extremely aesthetic_19_3.0.png - text: extremely extremely aesthetic output: url: images/extremely extremely aesthetic_20_3.0.png - text: extremely extremely aesthetic output: url: images/extremely extremely aesthetic_21_3.0.png - text: extremely extremely aesthetic output: url: images/extremely extremely aesthetic_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "extremely extremely aesthetic" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - extremely extremely aesthetic (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/extremely extremely aesthetic_17_-3.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_17_0.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_17_3.0.png" width=256 height=256 /> | | <img src="images/extremely extremely aesthetic_19_-3.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_19_0.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_19_3.0.png" width=256 height=256 /> | | <img src="images/extremely extremely aesthetic_20_-3.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_20_0.0.png" width=256 height=256 /> | <img src="images/extremely extremely aesthetic_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` extremely extremely aesthetic ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.extremely-extremely-aesthetic', weight_name='extremely extremely aesthetic.safetensors', adapter_name="extremely extremely aesthetic") # Activate the LoRA pipe.set_adapters(["extremely extremely aesthetic"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, extremely extremely aesthetic" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 910+ unique and diverse LoRAs, covering a wide range of styles and genres. 
You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
VinodReddy29/ppo-Huggy
VinodReddy29
2024-01-06T19:55:19Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-06T19:55:11Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VinodReddy29/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
ostapeno/newt_adaNeo1B_ropes_read_background_situation_sbs0.5_svdemb_sgd_full_ft_finegrained
ostapeno
2024-01-06T19:52:02Z
0
0
null
[ "region:us" ]
null
2024-01-06T14:25:11Z
Number of experts present in the library: 4 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ropes_read_background_situation_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | Last updated on: 2024-01-06 19:52:01+00:00
alirzb/S2_M1_R1_BEiT_42621224
alirzb
2024-01-06T19:28:05Z
147
0
transformers
[ "transformers", "pytorch", "beit", "image-classification", "generated_from_trainer", "base_model:microsoft/beit-base-patch16-224", "base_model:finetune:microsoft/beit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T18:34:36Z
--- license: apache-2.0 base_model: microsoft/beit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S2_M1_R1_BEiT_42621224 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S2_M1_R1_BEiT_42621224 This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0388 | 1.0 | 231 | 0.0141 | 0.9949 | | 0.0418 | 2.0 | 463 | 0.0076 | 0.9987 | | 0.0004 | 3.0 | 694 | 0.0002 | 1.0 | | 0.0044 | 4.0 | 926 | 0.0003 | 1.0 | | 0.0001 | 4.99 | 1155 | 0.0001 | 1.0 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
CyberHarem/sheila_majonotabitabi
CyberHarem
2024-01-06T19:26:01Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/sheila_majonotabitabi", "license:mit", "region:us" ]
text-to-image
2024-01-06T19:18:20Z
---
license: mit
datasets:
- CyberHarem/sheila_majonotabitabi
pipeline_tag: text-to-image
tags:
- art
---

# Lora of sheila_majonotabitabi

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).

The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.

For example, if you want to use the model from step 3740, you need to download `3740/sheila_majonotabitabi.pt` as the embedding and `3740/sheila_majonotabitabi.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.

**The best step we recommend is 3740**, with a score of 0.967.

The trigger words are:

1. `sheila_majonotabitabi`
2. `blonde_hair, long_hair, green_eyes, hair_between_eyes, ponytail`

With our regrets, this model is not recommended for the following groups:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals whose application scenarios demand high accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are the available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:------|:------|:---------|:----------|:----------|:----------|:----------|:-------|:--------|:-----|:-----|:-----|:-----|:------|:-----|:-------|
| 5100 | 0.879 | [Download](5100/sheila_majonotabitabi.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| 4760 | 0.908 | [Download](4760/sheila_majonotabitabi.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![pattern_4-4760](4760/previews/pattern_4.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) |
| 4420 | 0.946 | [Download](4420/sheila_majonotabitabi.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![pattern_4-4420](4420/previews/pattern_4.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) |
| 4080 | 0.886 | [Download](4080/sheila_majonotabitabi.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| **3740** | **0.967** | [**Download**](3740/sheila_majonotabitabi.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![pattern_4-3740](3740/previews/pattern_4.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) |
| 3400 | 0.860 | [Download](3400/sheila_majonotabitabi.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![pattern_4-3400](3400/previews/pattern_4.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) |
| 3060 | 0.895 | [Download](3060/sheila_majonotabitabi.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2720 | 0.847 | [Download](2720/sheila_majonotabitabi.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![pattern_4-2720](2720/previews/pattern_4.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) |
| 2380 | 0.934 | [Download](2380/sheila_majonotabitabi.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![pattern_4-2380](2380/previews/pattern_4.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) |
| 2040 | 0.924 | [Download](2040/sheila_majonotabitabi.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1700 | 0.849 | [Download](1700/sheila_majonotabitabi.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![pattern_4-1700](1700/previews/pattern_4.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) |
| 1360 | 0.828 | [Download](1360/sheila_majonotabitabi.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![pattern_4-1360](1360/previews/pattern_4.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) |
| 1020 | 0.761 | [Download](1020/sheila_majonotabitabi.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
| 680 | 0.704 | [Download](680/sheila_majonotabitabi.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![pattern_4-680](680/previews/pattern_4.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) |
| 340 | 0.656 | [Download](340/sheila_majonotabitabi.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![pattern_4-340](340/previews/pattern_4.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
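For Stable Diffusion web UIs, a usage sketch (assuming the AUTOMATIC1111 interface, which is not mentioned in this card): place the chosen step's `.pt` file in `embeddings/`, the `.safetensors` file in `models/Lora/`, and combine the embedding trigger word with the LoRA tag in the prompt, e.g.:

```text
sheila_majonotabitabi, <lora:sheila_majonotabitabi:0.8>, blonde_hair, long_hair, green_eyes, ponytail
```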
mus-shd/ppo-Pyramids
mus-shd
2024-01-06T19:21:25Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-01-06T19:21:23Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: mus-shd/ppo-Pyramids 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
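To grab this checkpoint locally first, a minimal sketch, assuming the `mlagents-load-from-hf` helper from the ML-Agents Hub integration is installed in your environment (the local directory name is arbitrary):

```bash
# Download the trained Pyramids agent from the Hub into ./downloads
mlagents-load-from-hf --repo-id="mus-shd/ppo-Pyramids" --local-dir="./downloads"
```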
jackoyoungblood/GPT2_Original2
jackoyoungblood
2024-01-06T19:11:54Z
8
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T19:11:23Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: GPT2_Original2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2_Original2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 150 - eval_batch_size: 150 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 1200 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
CatFlowerGames/clipseg-rd64-refined-cat
CatFlowerGames
2024-01-06T18:59:25Z
7
0
transformers
[ "transformers", "pytorch", "safetensors", "clipseg", "vision", "image-segmentation", "arxiv:2112.10003", "license:apache-2.0", "region:us" ]
image-segmentation
2024-01-06T18:55:10Z
--- license: apache-2.0 tags: - vision - image-segmentation inference: false --- # CLIPSeg model CLIPSeg model with a reduce dimension (`reduce_dim`) of 64, refined (using a more complex convolution). It was introduced in the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Lüddecke et al. and first released in [this repository](https://github.com/timojl/clipseg). # Intended use cases This model is intended for zero-shot and one-shot image segmentation. # Usage Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/clipseg), or see the sketch below.
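A minimal inference sketch with 🤗 Transformers, assuming this checkpoint keeps the standard CLIPSeg weight and processor layout (the repo id is this card's own; the image URL is just a convenient COCO sample):

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

repo = "CatFlowerGames/clipseg-rd64-refined-cat"
processor = CLIPSegProcessor.from_pretrained(repo)
model = CLIPSegForImageSegmentation.from_pretrained(repo)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
prompts = ["a cat", "a remote control"]

# One copy of the image per text prompt; the model segments each prompt independently
inputs = processor(text=prompts, images=[image] * len(prompts), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (num_prompts, 352, 352)
masks = torch.sigmoid(logits)  # per-pixel probabilities, one mask per prompt
```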
ladoza03/xlm-roberta-base-finetuned-panx-nl
ladoza03
2024-01-06T18:48:14Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-06T18:36:15Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-nl This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1406 - F1: 0.9110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2478 | 1.0 | 1250 | 0.1519 | 0.8691 | | 0.1247 | 2.0 | 2500 | 0.1346 | 0.8930 | | 0.0817 | 3.0 | 3750 | 0.1291 | 0.9064 | | 0.049 | 4.0 | 5000 | 0.1406 | 0.9110 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
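For quick inference with this NER checkpoint, a hedged sketch using the standard token-classification pipeline (the entity label set comes from the PAN-X/WikiANN-style training data suggested by the repo name, which the card itself does not document):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces back into whole entity spans
ner = pipeline(
    "token-classification",
    model="ladoza03/xlm-roberta-base-finetuned-panx-nl",
    aggregation_strategy="simple",
)
print(ner("Jeroen Dijsselbloem werd geboren in Eindhoven."))
```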
CyberHarem/flan_majonotabitabi
CyberHarem
2024-01-06T18:47:11Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/flan_majonotabitabi", "license:mit", "region:us" ]
text-to-image
2024-01-06T18:39:52Z
--- license: mit datasets: - CyberHarem/flan_majonotabitabi pipeline_tag: text-to-image tags: - art --- # Lora of flan_majonotabitabi This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 4420, download `4420/flan_majonotabitabi.pt` as the embedding and `4420/flan_majonotabitabi.safetensors` as the LoRA (see the loading sketch below). Using both files together, you can generate images of the desired character. **The best step we recommend is 4420**, with a score of 0.923. The trigger words are: 1. `flan_majonotabitabi` 2. `black_hair, long_hair, hair_over_one_eye, mole, mole_under_eye, ribbon, hat, witch_hat, smile, closed_eyes, blue_eyes` We do not recommend this model for the following groups: 1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail. 2. Individuals whose use cases demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the randomness inherent in AI images generated with the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated LoRA training process, or who believe character models must be trained purely by hand out of respect for the characters. 5. Individuals who find the generated image content offensive to their values.
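A minimal diffusers loading sketch. Treat it as an assumption-heavy illustration: HCP-Diffusion LoRAs may require conversion before `load_lora_weights` accepts them, and the base model below is simply the preview-generation model named above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load both files from the recommended step together, as the card describes
pipe.load_textual_inversion("4420/flan_majonotabitabi.pt", token="flan_majonotabitabi")
pipe.load_lora_weights("4420/flan_majonotabitabi.safetensors")

image = pipe("flan_majonotabitabi, black_hair, long_hair, witch_hat").images[0]
image.save("flan.png")
```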
These are available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 5100 | 0.877 | [Download](5100/flan_majonotabitabi.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| 4760 | 0.911 | [Download](4760/flan_majonotabitabi.zip) | ![pattern_1-4760](4760/previews/pattern_1.png) | ![pattern_2-4760](4760/previews/pattern_2.png) | ![pattern_3-4760](4760/previews/pattern_3.png) | ![bikini-4760](4760/previews/bikini.png) | [<NSFW, click to see>](4760/previews/bondage.png) | ![free-4760](4760/previews/free.png) | ![maid-4760](4760/previews/maid.png) | ![miko-4760](4760/previews/miko.png) | [<NSFW, click to see>](4760/previews/nude.png) | [<NSFW, click to see>](4760/previews/nude2.png) | ![suit-4760](4760/previews/suit.png) | ![yukata-4760](4760/previews/yukata.png) |
| **4420** | **0.923** | [**Download**](4420/flan_majonotabitabi.zip) | ![pattern_1-4420](4420/previews/pattern_1.png) | ![pattern_2-4420](4420/previews/pattern_2.png) | ![pattern_3-4420](4420/previews/pattern_3.png) | ![bikini-4420](4420/previews/bikini.png) | [<NSFW, click to see>](4420/previews/bondage.png) | ![free-4420](4420/previews/free.png) | ![maid-4420](4420/previews/maid.png) | ![miko-4420](4420/previews/miko.png) | [<NSFW, click to see>](4420/previews/nude.png) | [<NSFW, click to see>](4420/previews/nude2.png) | ![suit-4420](4420/previews/suit.png) | ![yukata-4420](4420/previews/yukata.png) |
| 4080 | 0.744 | [Download](4080/flan_majonotabitabi.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| 3740 | 0.902 | [Download](3740/flan_majonotabitabi.zip) | ![pattern_1-3740](3740/previews/pattern_1.png) | ![pattern_2-3740](3740/previews/pattern_2.png) | ![pattern_3-3740](3740/previews/pattern_3.png) | ![bikini-3740](3740/previews/bikini.png) | [<NSFW, click to see>](3740/previews/bondage.png) | ![free-3740](3740/previews/free.png) | ![maid-3740](3740/previews/maid.png) | ![miko-3740](3740/previews/miko.png) | [<NSFW, click to see>](3740/previews/nude.png) | [<NSFW, click to see>](3740/previews/nude2.png) | ![suit-3740](3740/previews/suit.png) | ![yukata-3740](3740/previews/yukata.png) |
| 3400 | 0.901 | [Download](3400/flan_majonotabitabi.zip) | ![pattern_1-3400](3400/previews/pattern_1.png) | ![pattern_2-3400](3400/previews/pattern_2.png) | ![pattern_3-3400](3400/previews/pattern_3.png) | ![bikini-3400](3400/previews/bikini.png) | [<NSFW, click to see>](3400/previews/bondage.png) | ![free-3400](3400/previews/free.png) | ![maid-3400](3400/previews/maid.png) | ![miko-3400](3400/previews/miko.png) | [<NSFW, click to see>](3400/previews/nude.png) | [<NSFW, click to see>](3400/previews/nude2.png) | ![suit-3400](3400/previews/suit.png) | ![yukata-3400](3400/previews/yukata.png) |
| 3060 | 0.899 | [Download](3060/flan_majonotabitabi.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2720 | 0.712 | [Download](2720/flan_majonotabitabi.zip) | ![pattern_1-2720](2720/previews/pattern_1.png) | ![pattern_2-2720](2720/previews/pattern_2.png) | ![pattern_3-2720](2720/previews/pattern_3.png) | ![bikini-2720](2720/previews/bikini.png) | [<NSFW, click to see>](2720/previews/bondage.png) | ![free-2720](2720/previews/free.png) | ![maid-2720](2720/previews/maid.png) | ![miko-2720](2720/previews/miko.png) | [<NSFW, click to see>](2720/previews/nude.png) | [<NSFW, click to see>](2720/previews/nude2.png) | ![suit-2720](2720/previews/suit.png) | ![yukata-2720](2720/previews/yukata.png) |
| 2380 | 0.907 | [Download](2380/flan_majonotabitabi.zip) | ![pattern_1-2380](2380/previews/pattern_1.png) | ![pattern_2-2380](2380/previews/pattern_2.png) | ![pattern_3-2380](2380/previews/pattern_3.png) | ![bikini-2380](2380/previews/bikini.png) | [<NSFW, click to see>](2380/previews/bondage.png) | ![free-2380](2380/previews/free.png) | ![maid-2380](2380/previews/maid.png) | ![miko-2380](2380/previews/miko.png) | [<NSFW, click to see>](2380/previews/nude.png) | [<NSFW, click to see>](2380/previews/nude2.png) | ![suit-2380](2380/previews/suit.png) | ![yukata-2380](2380/previews/yukata.png) |
| 2040 | 0.842 | [Download](2040/flan_majonotabitabi.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1700 | 0.726 | [Download](1700/flan_majonotabitabi.zip) | ![pattern_1-1700](1700/previews/pattern_1.png) | ![pattern_2-1700](1700/previews/pattern_2.png) | ![pattern_3-1700](1700/previews/pattern_3.png) | ![bikini-1700](1700/previews/bikini.png) | [<NSFW, click to see>](1700/previews/bondage.png) | ![free-1700](1700/previews/free.png) | ![maid-1700](1700/previews/maid.png) | ![miko-1700](1700/previews/miko.png) | [<NSFW, click to see>](1700/previews/nude.png) | [<NSFW, click to see>](1700/previews/nude2.png) | ![suit-1700](1700/previews/suit.png) | ![yukata-1700](1700/previews/yukata.png) |
| 1360 | 0.672 | [Download](1360/flan_majonotabitabi.zip) | ![pattern_1-1360](1360/previews/pattern_1.png) | ![pattern_2-1360](1360/previews/pattern_2.png) | ![pattern_3-1360](1360/previews/pattern_3.png) | ![bikini-1360](1360/previews/bikini.png) | [<NSFW, click to see>](1360/previews/bondage.png) | ![free-1360](1360/previews/free.png) | ![maid-1360](1360/previews/maid.png) | ![miko-1360](1360/previews/miko.png) | [<NSFW, click to see>](1360/previews/nude.png) | [<NSFW, click to see>](1360/previews/nude2.png) | ![suit-1360](1360/previews/suit.png) | ![yukata-1360](1360/previews/yukata.png) |
| 1020 | 0.574 | [Download](1020/flan_majonotabitabi.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
| 680 | 0.454 | [Download](680/flan_majonotabitabi.zip) | ![pattern_1-680](680/previews/pattern_1.png) | ![pattern_2-680](680/previews/pattern_2.png) | ![pattern_3-680](680/previews/pattern_3.png) | ![bikini-680](680/previews/bikini.png) | [<NSFW, click to see>](680/previews/bondage.png) | ![free-680](680/previews/free.png) | ![maid-680](680/previews/maid.png) | ![miko-680](680/previews/miko.png) | [<NSFW, click to see>](680/previews/nude.png) | [<NSFW, click to see>](680/previews/nude2.png) | ![suit-680](680/previews/suit.png) | ![yukata-680](680/previews/yukata.png) |
| 340 | 0.860 | [Download](340/flan_majonotabitabi.zip) | ![pattern_1-340](340/previews/pattern_1.png) | ![pattern_2-340](340/previews/pattern_2.png) | ![pattern_3-340](340/previews/pattern_3.png) | ![bikini-340](340/previews/bikini.png) | [<NSFW, click to see>](340/previews/bondage.png) | ![free-340](340/previews/free.png) | ![maid-340](340/previews/maid.png) | ![miko-340](340/previews/miko.png) | [<NSFW, click to see>](340/previews/nude.png) | [<NSFW, click to see>](340/previews/nude2.png) | ![suit-340](340/previews/suit.png) | ![yukata-340](340/previews/yukata.png) |
Panchovix/goliath-120b-exl2-4.25bpw-rpcal
Panchovix
2024-01-06T18:41:09Z
12
1
transformers
[ "transformers", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-11-26T22:43:29Z
--- license: llama2 --- EXL2 quant of alpindale/goliath-120b (https://huggingface.co/alpindale/goliath-120b), to be used with exllamav2. Quantized at 4.25bpw so that CFG can be used comfortably on 72GB VRAM (use 20,21,22 for the GPU split). Update 06/01/2024: re-quantized with the new quant method; thanks for the measurement [here](https://github.com/turboderp/exllamav2/files/13846439/goliath-120b-rpcal-measurement.json) The calibration dataset is a cleaned, fixed PIPPA RP dataset, which skews the results in favor of RP usage. You can find the calibration dataset [here](https://huggingface.co/datasets/royallab/PIPPA-cleaned) I've added a measurement.json file if you want to do your own quants. # Original model card # Goliath 120B An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one. Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix): - [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp) - [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite) - [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM) - [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI) # Prompting Format Both Vicuna and Alpaca will work, but due to the initial and final layers belonging primarily to Xwin, I expect Vicuna to work the best. # Merge process The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B). The layer ranges used are as follows: ```yaml - range 0, 16 Xwin - range 8, 24 Euryale - range 17, 32 Xwin - range 25, 40 Euryale - range 33, 48 Xwin - range 41, 56 Euryale - range 49, 64 Xwin - range 57, 72 Euryale - range 65, 80 Xwin ``` # Screenshots ![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png) # Benchmarks Coming soon. # Acknowledgements Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
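To fetch the quant for an exllamav2-based frontend, a small sketch using `huggingface_hub` (the local directory name is arbitrary):

```python
from huggingface_hub import snapshot_download

# Downloads the quantized shards plus measurement.json (reusable for your own quants)
snapshot_download(
    repo_id="Panchovix/goliath-120b-exl2-4.25bpw-rpcal",
    local_dir="goliath-120b-exl2-4.25bpw-rpcal",
)
```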
alirzb/S5_M1_fold4_ViT_42618593
alirzb
2024-01-06T18:27:46Z
177
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T16:41:15Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S5_M1_fold4_ViT_42618593 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S5_M1_fold4_ViT_42618593 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0091 - Accuracy: 0.9992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0072 | 1.0 | 368 | 0.0147 | 0.9960 | | 0.0161 | 2.0 | 737 | 0.0104 | 0.9984 | | 0.0012 | 3.0 | 1105 | 0.0104 | 0.9976 | | 0.0001 | 4.0 | 1474 | 0.0091 | 0.9992 | | 0.0 | 4.99 | 1840 | 0.0091 | 0.9992 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
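Since the card leaves the dataset and label set undocumented, only a generic inference sketch can be offered; the predicted labels follow the checkpoint's own `id2label` mapping:

```python
from transformers import pipeline

clf = pipeline("image-classification", model="alirzb/S5_M1_fold4_ViT_42618593")
print(clf("example.jpg"))  # any local path or URL; label names come from the model config
```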
1nferno/Fake_news_detection
1nferno
2024-01-06T18:27:02Z
15
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-06T14:40:24Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: Fake_news_detection results: [] widget: - text: "Former CIA director John Brennan on Friday criticized as “disgraceful” President Donald Trump’s efforts to play down U.S. intelligence agencies’ assessment that Russia meddled in the 2016 U.S. election. Trump’s administration has been dogged by investigations into allegations of Russian interference in last year’s U.S. presidential election and possible ties with his campaign team. Speaking one day before his first meeting with Russian President Vladimir Putin in Hamburg earlier this month, Trump said he suspected Russian interference in the election but that no one knows for sure. “These types of comments are just disgraceful ... and the person who said them should be ashamed of himself,” said Brennan, CIA chief under former President Barack Obama, at the Aspen Security Forum. Special Counsel Robert Mueller and several U.S. congressional committees are investigating whether Russia interfered in the election and colluded with Trump’s campaign to try to swing the race in his favor over Democratic rival Hillary Clinton. Those probes are focused almost exclusively on Moscow’s actions, lawmakers and intelligence officials have said, and no evidence has surfaced publicly implicating other countries. Moscow has denied any interference, and Trump has said that his campaign did not collude with Russia. Brennan said he was disappointed by the president’s handling of security issues in his first six months in office." example_title: "Real News" - text: "Bravo! These two great Americans make me have hope for the politicians we elect. They re doing a damn good job of exposing the phony Iran deal. They both are veterans and super smart so maybe they ve been able to outsmart the Obama thugs. I love these guys!Rep. Mike Pompeo (R Kan.) and Sen. Tom Cotton (R Ark.) have a lot in common. Both are army veterans and both are graduates of Harvard Law School. And both have been doing a great job of exposing aspects of the nuclear deal with Iran that the administration would rather keep quiet.This week it was reported that an inquiry from Pompeo got the State Department to admit that the nuclear deal was never signed and is not legally binding. Julia Frifield, the Assistant Secretary of State for Legislative Affairs, wrote in response to Pompeo s inquiry if he could see the signed agreement, in a letter reproduced at the congressman s website, that the nuclear deal was not binding and that it was not signed by any party. The key parts of the letter read:The Joint Comprehensive Plan of Action (JCPOA) is not a treaty or an executive agreement, and is not a signed document The success of the JCPOA will depend not on whether it is legally binding or signed, but rather on the extensive verification measures we have put in place, as well as Iran s understanding that we have the capacity to re-impose and ramp up our sanctions if Iran does not meet its commitments.Frifield asserted that the JCPOA was not a signed agreement but reflections of political commitments between Iran and the P5+1 nations the United States, Britain, France, China, Russia and Germany.Pompeo responded, For the State Department to try to defend the unsigned and non-binding Iran nuclear agreement by calling it a political commitment is about as absurd as the terms of the deal itself." 
example_title: "Fake news" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fake_news_detection This model is a fine-tuned version of Bert on an Fake news dataset ( https://huggingface.co/datasets/ErfanMoosaviMonazzah/fake-news-detection-dataset-English ). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0015 | 1.0 | 2245 | 0.0000 | 1.0 | | 0.0005 | 2.0 | 4490 | 0.0000 | 1.0 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
ladoza03/xlm-roberta-base-finetuned-panx-vi
ladoza03
2024-01-06T18:24:02Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-06T18:13:45Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-vi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-vi This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1951 - F1: 0.9134 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.383 | 1.0 | 1250 | 0.2193 | 0.8558 | | 0.2044 | 2.0 | 2500 | 0.2182 | 0.8780 | | 0.1361 | 3.0 | 3750 | 0.1889 | 0.8980 | | 0.0888 | 4.0 | 5000 | 0.1951 | 0.9134 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
Tech-Meld/HS-Instructed
Tech-Meld
2024-01-06T18:23:25Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-01-06T18:23:25Z
--- license: creativeml-openrail-m ---
rohansolo/BB-L-01-7B-mlx
rohansolo
2024-01-06T18:13:38Z
1
0
mlx
[ "mlx", "mistral", "alignment-handbook", "generated_from_trainer", "hi", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:rohansolo/BB_HindiHinglish", "base_model:mistralai/Mistral-7B-v0.1", "base_model:finetune:mistralai/Mistral-7B-v0.1", "license:cc-by-nc-4.0", "region:us" ]
null
2024-01-06T18:03:24Z
--- language: - hi license: cc-by-nc-4.0 tags: - alignment-handbook - generated_from_trainer - mlx datasets: - HuggingFaceH4/ultrachat_200k - rohansolo/BB_HindiHinglish base_model: mistralai/Mistral-7B-v0.1 model-index: - name: BB-L-01-7B results: [] --- # BB-L-01-7B-mlx This model was converted to MLX format from [`rohansolo/BB-L-01-7B`](https://huggingface.co/rohansolo/BB-L-01-7B). Refer to the [original model card](https://huggingface.co/rohansolo/BB-L-01-7B) for more details on the model. ## Use with mlx ```bash pip install mlx git clone https://github.com/ml-explore/mlx-examples.git cd mlx-examples/llms/hf_llm python generate.py --model rohansolo/BB-L-01-7B-mlx --prompt "<|system|> You are a helpful AI assistant</s> <|user|> एक पाइथन स्क्रिप्ट लिखो बबल सॉर्ट के लिए</s>" ``` (The example prompt asks, in Hindi, for a Python script for bubble sort.)
ladoza03/xlm-roberta-base-finetuned-panx-en
ladoza03
2024-01-06T18:13:22Z
5
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-06T14:59:16Z
--- license: mit base_model: xlm-roberta-base tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2771 - F1: 0.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3975 | 1.0 | 1250 | 0.2850 | 0.7912 | | 0.2466 | 2.0 | 2500 | 0.2563 | 0.8094 | | 0.178 | 3.0 | 3750 | 0.2654 | 0.8260 | | 0.1273 | 4.0 | 5000 | 0.2771 | 0.8326 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
devnote5676/schwartz-values-classifier
devnote5676
2024-01-06T18:12:33Z
26
2
transformers
[ "transformers", "safetensors", "bert", "text-classification", "social-values", "en", "dataset:webis/Touche23-ValueEval", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-04T18:28:13Z
--- license: mit datasets: - webis/Touche23-ValueEval language: - en metrics: - f1 tags: - social-values --- # Schwartz Value Classifier This classifier predicts the presence of social values in text snippets. *Disclaimer: this is not the official repo published by the authors of the paper, and may not truly replicate the performance described in the original study* ## Value dimensions In this model we follow the 10-dimensional categorization of the Schwartz values. [link](https://en.wikipedia.org/wiki/Theory_of_basic_human_values) 1. security – safety, harmony, and stability of society, of relationships, and of self 2. power – social status and prestige, control or dominance over people and resources 3. achievement – personal success through demonstrating competence according to social standards 4. hedonism – pleasure or sensuous gratification for oneself 5. stimulation – excitement, novelty and challenge in life 6. self-direction – independent thought and action—choosing, creating, exploring 7. universalism – understanding, appreciation, tolerance, and protection for the welfare of all people and for nature 8. benevolence – preserving and enhancing the welfare of those with whom one is in frequent personal contact (the 'in-group') 9. conformity – restraint of actions, inclinations, and impulses likely to upset or harm others and violate social expectations or norms 10. tradition – respect, commitment, and acceptance of the customs and ideas that one's culture or religion provides ## Datasets This model is finetuned on two datasets: ValueNet (A New Dataset for Human Value Driven Dialogue System, Qiu et al. 2021) and Touche23-ValueEval (The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments, Mirzakhmedova et al., 2023). Following the original paper, we convert both datasets into a binary classification task for each dimension. - ValueNet - A sentence has a positive label if the original label contains 1 (positive) or -1 (negative), and 0 if the original label is 0. - ValueEval - A sentence is assigned a positive label if the original label vector is marked 1 for that dimension. - Since the original paper follows a 20-dimension refined categorization, we map them back to 10 dimensions. Therefore, the same sentence appears ten times, once for each dimension. ## How to use Start your sentence with a label that indicates which dimension to measure (see the sketch below). An example would be: - \<power> [SEP] staying out late after telling my girlfriend I could be home early Make sure to follow the exact format "<value\_name>" at the beginning of the sentence, as this is a special token in the tokenizer: any extra spaces or different formats will not be encoded correctly. ## Performance - macro F1 score - on ValueNet: 0.648 - on ValueEval: 0.744 - Combined: 0.759 - ROC-AUC - on ValueNet: 0.736 - on ValueEval: 0.847 - Combined: 0.855 ## Training details - Base model: bert-base-uncased - Epochs: 10 w/ early stopping after no F1 increase in 3 epochs - Learning rate: 5e-5 w/ warmup over the first 0.03 of training steps and subsequent linear decay - Batch size: 32 - Upsampled training set to maintain 1:1 balance for pos:neg labels. ## References - Do Differences in Values Influence Disagreements in Online Discussions? (EMNLP'23) [link](https://aclanthology.org/2023.emnlp-main.992/)
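A minimal usage sketch of the input format above, assuming the checkpoint works with the stock text-classification pipeline (a positive prediction means the queried dimension is present):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="devnote5676/schwartz-values-classifier")

# The dimension token must lead the sentence, exactly as "<power>", "<tradition>", etc.
text = "<power> [SEP] staying out late after telling my girlfriend I could be home early"
print(clf(text))
```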
ludis/tsukasa-13b-qlora-limarp
ludis
2024-01-06T18:00:20Z
26
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:PygmalionAI/PIPPA", "dataset:ludis/geepeetee4", "dataset:lemonilia/LimaRP", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-08T04:38:37Z
--- datasets: - PygmalionAI/PIPPA - ludis/geepeetee4 - lemonilia/LimaRP --- ## Prompting https://rentry.org/tsukasa13b - recommended prompts and gen settings The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history (see the sketch below). ## Training The base model (llama-2-13b-hf) was tuned on the koishi dataset (commit c83d922) for 1 epoch, then on the pippa dataset (commit 6412b0c) for 1 epoch, then on the geepeetee4 dataset (commit c83d922) for 1 epoch, and finally on LimaRP (version 2023-09-14, without the ponyville, lolicit, and all-the-fallen subsets) for 2 epochs.
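A hedged sketch of assembling that role format for generation with plain transformers (sampling settings are placeholders, not the linked recommended settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ludis/tsukasa-13b-qlora-limarp")
model = AutoModelForCausalLM.from_pretrained("ludis/tsukasa-13b-qlora-limarp", device_map="auto")

# Chain the role tokens to build a conversation history, ending with <|model|>
prompt = (
    "<|system|>You are Tsukasa, a friendly travel companion."
    "<|user|>Where should we head next?"
    "<|model|>"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```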
ludis/tsukasa-limarp-7b
ludis
2024-01-06T17:56:49Z
11
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:PygmalionAI/PIPPA", "dataset:lemonilia/LimaRP", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-09-03T17:43:29Z
--- datasets: - PygmalionAI/PIPPA - lemonilia/LimaRP --- ## Prompting https://rentry.org/v43eo - recommended prompts and gen settings The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history. ## Training The base model (llama-2-7b-hf) was tuned on commit de693ac of the koishi dataset for 1 epoch as part of ludis/tsukasa-7b, then on commit 36fc235 of pippa metharme for 1 epoch as part of ludis/tsukasa-7b, and then on version 2023-09-03 of LimaRP (without the ponyville, lolicit, all-the-fallen, and eka's portal subsets) for 2 epochs.
Serpol1999/serpol1999-test
Serpol1999
2024-01-06T17:56:28Z
0
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-06T17:49:59Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Serpol1999/test Dreambooth model trained by Serpol1999 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
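Beyond the Colab notebooks, a minimal diffusers sketch for sampling from the concept (the trigger phrase is not documented in this card, so the prompt below is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Serpol1999/serpol1999-test", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of the trained concept").images[0]  # replace with the concept's trigger phrase
image.save("sample.png")
```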
parsak/mistralcode-7b-instruct-lora-adapters
parsak
2024-01-06T17:53:25Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "region:us" ]
null
2024-01-06T17:12:37Z
--- library_name: peft base_model: mistralai/Mistral-7B-Instruct-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
mandaaarina/gradio-test
mandaaarina
2024-01-06T17:51:54Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-01-06T17:51:48Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
ostapeno/newt_adaNeo1B_ropes_read_background_situation_sbs0.5_svdemb_sgd_full_ft_coarsegrained
ostapeno
2024-01-06T17:50:15Z
0
0
null
[ "region:us" ]
null
2024-01-06T12:10:54Z
Number of experts present in the library: 10 | Expert Name | Base Model | Trained on | Adapter Type | | --- | --- | --- | --- | | ropes_read_background_situation_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | | ropes_read_background_situation_v9 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora | Last updated on: 2024-01-06 17:50:14+00:00
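A hedged loading sketch for a single expert, assuming each table row is stored as a standard PEFT LoRA adapter in a subfolder named after the expert (the repository layout is not documented in this card):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
expert = PeftModel.from_pretrained(
    base,
    "ostapeno/newt_adaNeo1B_ropes_read_background_situation_sbs0.5_svdemb_sgd_full_ft_coarsegrained",
    subfolder="ropes_read_background_situation_v2",  # assumed per-expert subfolder name
)
```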
junzhangli/my_awesome_billsum_model
junzhangli
2024-01-06T17:49:33Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "led", "text2text-generation", "generated_from_trainer", "base_model:allenai/led-base-16384", "base_model:finetune:allenai/led-base-16384", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-06T17:21:24Z
--- license: apache-2.0 base_model: allenai/led-base-16384 tags: - generated_from_trainer model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.7927 - eval_rouge1: 0.1895 - eval_rouge2: 0.0969 - eval_rougeL: 0.1641 - eval_rougeLsum: 0.1692 - eval_gen_len: 20.0 - eval_runtime: 47.5253 - eval_samples_per_second: 5.218 - eval_steps_per_second: 1.052 - epoch: 2.0 - step: 396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
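For inference, a hedged sketch with the summarization pipeline (the evaluation's `gen_len` of 20.0 suggests very short outputs; `max_length` below mirrors that and is otherwise an assumption):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="junzhangli/my_awesome_billsum_model")
text = "This bill amends the Internal Revenue Code to ..."  # any bill text
print(summarizer(text, max_length=20))
```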
IBB-University/ghadeer_classifecation_news
IBB-University
2024-01-06T17:48:22Z
7
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "ar", "dataset:IBB-University/Ghadeer_news", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-05T18:48:28Z
--- language: - ar library_name: transformers pipeline_tag: text-classification metrics: - accuracy widget: - text: ' جماعة انصار الله الحوثية تقوم بعدل حشد جماهيري ضد العدوان الغاشم على اليمن' - text: 'ارتفاع اسعار المواد الغذائية في الاسواق مما يؤدي إلى ازمة في الاقتصاد' - text: 'فوز المنتخب اليمني برياضة كرة القدم فاز بكاس غرب اسياء للناشئين' - text: 'كيف سيؤثر الذكاء الاصطناعي التوليدي على الصناعات العالمية الكبرى' datasets: - IBB-University/Ghadeer_news --- # Classification of Arabic News Using AraBERT One of the best-known models that applies transformer networks to Arabic text classification is BERT (Bidirectional Encoder Representations from Transformers). BERT is trained on a huge amount of diverse linguistic data, including Arabic, which allows it to capture language relationships well. Other BERT-based models have been developed to improve classification performance in Arabic, such as AraBERT and ARA-BERT. These models are trained on large Arabic-specific corpora, allowing them to achieve outstanding classification performance for the Arabic language. The use of transformer networks in Arabic classification is currently an active area of research and development, with researchers and engineers working to improve existing models and develop new techniques to meet the challenges of Arabic and improve classification accuracy in this context. # Citation (AraBERT) As the AraBERT authors note, Google Scholar has their BibTeX wrong (missing a name); use this instead: @inproceedings{antoun2020arabert, title={AraBERT: Transformer-based Model for Arabic Language Understanding}, author={Antoun, Wissam and Baly, Fady and Hajj, Hazem}, booktitle={LREC 2020 Workshop Language Resources and Evaluation Conference 11--16 May 2020}, pages={9} } # DATASET | Category | Articles | | :---: | :---: | | Local | 5000 | | Sports | 5000 | | Policy | 5000 | | Economy | 5000 | | Cultural | 5000 | | Technology | 5000 | # LABEL_DATASET | Label | Class | | :---: | :---: | | label_0 | رياضية (Sports) | | label_1 | سياسية (Policy) | | label_2 | اقتصاد (Economy) | | label_3 | تكنولوجيا (Technology) | | label_4 | محلية (Local) | | label_5 | ثقافية (Cultural) | # Training parameters | Parameter | Value | | :---: | :---: | | Training batch size | `8` | | Evaluation batch size | `8` | | Learning rate | `2e-5` | | Max length target | `203` | | Epochs | `1` | # Results | Metric | Value | | :---: | :---: | | Training Loss | `0.21533072472327064` | | Classification Accuracy | `0.1619285045662197` | | Validation Accuracy | `0.9664634146341463` |
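A quick inference sketch using one of the card's own widget examples (per the mapping above, this sports headline should come out as label_0):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="IBB-University/ghadeer_classifecation_news")
print(clf("فوز المنتخب اليمني برياضة كرة القدم فاز بكاس غرب اسياء للناشئين"))  # Sports example
```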
Sunirmala/Medic-Phi2
Sunirmala
2024-01-06T17:45:37Z
0
0
adapter-transformers
[ "adapter-transformers", "safetensors", "phi-msft", "custom_code", "license:mit", "region:us" ]
null
2024-01-06T15:24:23Z
--- license: mit library_name: adapter-transformers ---
alirzb/S5_M1_fold3_ViT_42618589
alirzb
2024-01-06T17:34:46Z
175
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-06T16:20:06Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S5_M1_fold3_ViT_42618589 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S5_M1_fold3_ViT_42618589 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0068 - Accuracy: 0.9984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0026 | 1.0 | 368 | 0.0069 | 0.9976 | | 0.0052 | 2.0 | 737 | 0.0094 | 0.9984 | | 0.0006 | 3.0 | 1105 | 0.0086 | 0.9984 | | 0.0 | 4.0 | 1474 | 0.0068 | 0.9984 | | 0.0 | 4.99 | 1840 | 0.0068 | 0.9984 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
CyberHarem/elaina_majonotabitabi
CyberHarem
2024-01-06T17:30:26Z
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/elaina_majonotabitabi", "license:mit", "region:us" ]
text-to-image
2024-01-06T17:13:45Z
--- license: mit datasets: - CyberHarem/elaina_majonotabitabi pipeline_tag: text-to-image tags: - art --- # Lora of elaina_majonotabitabi This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion); the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs). The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11). After downloading the pt and safetensors files for the specified step, you need to use them together: the pt file is loaded as an embedding, while the safetensors file is loaded as a LoRA. For example, to use the model from step 7140, download `7140/elaina_majonotabitabi.pt` as the embedding and `7140/elaina_majonotabitabi.safetensors` as the LoRA. Using both files together, you can generate images of the desired character. **The best step we recommend is 7140**, with a score of 0.953. The trigger words are: 1. `elaina_majonotabitabi` 2. `long_hair, bangs, hair_between_eyes, blue_eyes, closed_mouth, grey_hair, bow, white_hair, hat, witch_hat, black_headwear, purple_eyes` We do not recommend this model for the following groups: 1. Individuals who cannot tolerate any deviation from the original character design, even in the slightest detail. 2. Individuals whose use cases demand high accuracy in recreating character outfits. 3. Individuals who cannot accept the randomness inherent in AI images generated with the Stable Diffusion algorithm. 4. Individuals who are not comfortable with the fully automated LoRA training process, or who believe character models must be trained purely by hand out of respect for the characters. 5. Individuals who find the generated image content offensive to their values.
These are the available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | pattern_9 | pattern_10 | pattern_11 | pattern_12 | pattern_13 | pattern_14 | pattern_15 | pattern_16 | pattern_17 | pattern_18 | pattern_19 | pattern_20 | pattern_21 | pattern_22 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 15300 | 0.943 | [Download](15300/elaina_majonotabitabi.zip) | ![pattern_1-15300](15300/previews/pattern_1.png) | ![pattern_2-15300](15300/previews/pattern_2.png) | ![pattern_3-15300](15300/previews/pattern_3.png) | ![pattern_4-15300](15300/previews/pattern_4.png) | ![pattern_5-15300](15300/previews/pattern_5.png) | ![pattern_6-15300](15300/previews/pattern_6.png) | ![pattern_7-15300](15300/previews/pattern_7.png) | ![pattern_8-15300](15300/previews/pattern_8.png) | ![pattern_9-15300](15300/previews/pattern_9.png) | ![pattern_10-15300](15300/previews/pattern_10.png) | ![pattern_11-15300](15300/previews/pattern_11.png) | ![pattern_12-15300](15300/previews/pattern_12.png) | ![pattern_13-15300](15300/previews/pattern_13.png) | ![pattern_14-15300](15300/previews/pattern_14.png) | ![pattern_15-15300](15300/previews/pattern_15.png) | ![pattern_16-15300](15300/previews/pattern_16.png) | ![pattern_17-15300](15300/previews/pattern_17.png) | ![pattern_18-15300](15300/previews/pattern_18.png) | ![pattern_19-15300](15300/previews/pattern_19.png) | ![pattern_20-15300](15300/previews/pattern_20.png) | ![pattern_21-15300](15300/previews/pattern_21.png) | ![pattern_22-15300](15300/previews/pattern_22.png) | ![bikini-15300](15300/previews/bikini.png) | [<NSFW, click to see>](15300/previews/bondage.png) | ![free-15300](15300/previews/free.png) | ![maid-15300](15300/previews/maid.png) | ![miko-15300](15300/previews/miko.png) | [<NSFW, click to see>](15300/previews/nude.png) | [<NSFW, click to see>](15300/previews/nude2.png) | ![suit-15300](15300/previews/suit.png) | ![yukata-15300](15300/previews/yukata.png) |
| 14280 | 0.949 | [Download](14280/elaina_majonotabitabi.zip) | ![pattern_1-14280](14280/previews/pattern_1.png) | ![pattern_2-14280](14280/previews/pattern_2.png) | ![pattern_3-14280](14280/previews/pattern_3.png) | ![pattern_4-14280](14280/previews/pattern_4.png) | ![pattern_5-14280](14280/previews/pattern_5.png) | ![pattern_6-14280](14280/previews/pattern_6.png) | ![pattern_7-14280](14280/previews/pattern_7.png) | ![pattern_8-14280](14280/previews/pattern_8.png) | ![pattern_9-14280](14280/previews/pattern_9.png) | ![pattern_10-14280](14280/previews/pattern_10.png) | ![pattern_11-14280](14280/previews/pattern_11.png) | ![pattern_12-14280](14280/previews/pattern_12.png) | ![pattern_13-14280](14280/previews/pattern_13.png) | ![pattern_14-14280](14280/previews/pattern_14.png) | ![pattern_15-14280](14280/previews/pattern_15.png) | ![pattern_16-14280](14280/previews/pattern_16.png) | ![pattern_17-14280](14280/previews/pattern_17.png) | ![pattern_18-14280](14280/previews/pattern_18.png) | ![pattern_19-14280](14280/previews/pattern_19.png) | ![pattern_20-14280](14280/previews/pattern_20.png) | ![pattern_21-14280](14280/previews/pattern_21.png) | ![pattern_22-14280](14280/previews/pattern_22.png) | ![bikini-14280](14280/previews/bikini.png) | [<NSFW, click to see>](14280/previews/bondage.png) | ![free-14280](14280/previews/free.png) | ![maid-14280](14280/previews/maid.png) | ![miko-14280](14280/previews/miko.png) | [<NSFW, click to see>](14280/previews/nude.png) | [<NSFW, click to see>](14280/previews/nude2.png) | ![suit-14280](14280/previews/suit.png) | ![yukata-14280](14280/previews/yukata.png) |
| 13260 | 0.953 | [Download](13260/elaina_majonotabitabi.zip) | ![pattern_1-13260](13260/previews/pattern_1.png) | ![pattern_2-13260](13260/previews/pattern_2.png) | ![pattern_3-13260](13260/previews/pattern_3.png) | ![pattern_4-13260](13260/previews/pattern_4.png) | ![pattern_5-13260](13260/previews/pattern_5.png) | ![pattern_6-13260](13260/previews/pattern_6.png) | ![pattern_7-13260](13260/previews/pattern_7.png) | ![pattern_8-13260](13260/previews/pattern_8.png) | ![pattern_9-13260](13260/previews/pattern_9.png) | ![pattern_10-13260](13260/previews/pattern_10.png) | ![pattern_11-13260](13260/previews/pattern_11.png) | ![pattern_12-13260](13260/previews/pattern_12.png) | ![pattern_13-13260](13260/previews/pattern_13.png) | ![pattern_14-13260](13260/previews/pattern_14.png) | ![pattern_15-13260](13260/previews/pattern_15.png) | ![pattern_16-13260](13260/previews/pattern_16.png) | ![pattern_17-13260](13260/previews/pattern_17.png) | ![pattern_18-13260](13260/previews/pattern_18.png) | ![pattern_19-13260](13260/previews/pattern_19.png) | ![pattern_20-13260](13260/previews/pattern_20.png) | ![pattern_21-13260](13260/previews/pattern_21.png) | ![pattern_22-13260](13260/previews/pattern_22.png) | ![bikini-13260](13260/previews/bikini.png) | [<NSFW, click to see>](13260/previews/bondage.png) | ![free-13260](13260/previews/free.png) | ![maid-13260](13260/previews/maid.png) | ![miko-13260](13260/previews/miko.png) | [<NSFW, click to see>](13260/previews/nude.png) | [<NSFW, click to see>](13260/previews/nude2.png) | ![suit-13260](13260/previews/suit.png) | ![yukata-13260](13260/previews/yukata.png) |
| 12240 | 0.952 | [Download](12240/elaina_majonotabitabi.zip) | ![pattern_1-12240](12240/previews/pattern_1.png) | ![pattern_2-12240](12240/previews/pattern_2.png) | ![pattern_3-12240](12240/previews/pattern_3.png) | ![pattern_4-12240](12240/previews/pattern_4.png) | ![pattern_5-12240](12240/previews/pattern_5.png) | ![pattern_6-12240](12240/previews/pattern_6.png) | ![pattern_7-12240](12240/previews/pattern_7.png) | ![pattern_8-12240](12240/previews/pattern_8.png) | ![pattern_9-12240](12240/previews/pattern_9.png) | ![pattern_10-12240](12240/previews/pattern_10.png) | ![pattern_11-12240](12240/previews/pattern_11.png) | ![pattern_12-12240](12240/previews/pattern_12.png) | ![pattern_13-12240](12240/previews/pattern_13.png) | ![pattern_14-12240](12240/previews/pattern_14.png) | ![pattern_15-12240](12240/previews/pattern_15.png) | ![pattern_16-12240](12240/previews/pattern_16.png) | ![pattern_17-12240](12240/previews/pattern_17.png) | ![pattern_18-12240](12240/previews/pattern_18.png) | ![pattern_19-12240](12240/previews/pattern_19.png) | ![pattern_20-12240](12240/previews/pattern_20.png) | ![pattern_21-12240](12240/previews/pattern_21.png) | ![pattern_22-12240](12240/previews/pattern_22.png) | ![bikini-12240](12240/previews/bikini.png) | [<NSFW, click to see>](12240/previews/bondage.png) | ![free-12240](12240/previews/free.png) | ![maid-12240](12240/previews/maid.png) | ![miko-12240](12240/previews/miko.png) | [<NSFW, click to see>](12240/previews/nude.png) | [<NSFW, click to see>](12240/previews/nude2.png) | ![suit-12240](12240/previews/suit.png) | ![yukata-12240](12240/previews/yukata.png) |
| 11220 | 0.945 | [Download](11220/elaina_majonotabitabi.zip) | ![pattern_1-11220](11220/previews/pattern_1.png) | ![pattern_2-11220](11220/previews/pattern_2.png) | ![pattern_3-11220](11220/previews/pattern_3.png) | ![pattern_4-11220](11220/previews/pattern_4.png) | ![pattern_5-11220](11220/previews/pattern_5.png) | ![pattern_6-11220](11220/previews/pattern_6.png) | ![pattern_7-11220](11220/previews/pattern_7.png) | ![pattern_8-11220](11220/previews/pattern_8.png) | ![pattern_9-11220](11220/previews/pattern_9.png) | ![pattern_10-11220](11220/previews/pattern_10.png) | ![pattern_11-11220](11220/previews/pattern_11.png) | ![pattern_12-11220](11220/previews/pattern_12.png) | ![pattern_13-11220](11220/previews/pattern_13.png) | ![pattern_14-11220](11220/previews/pattern_14.png) | ![pattern_15-11220](11220/previews/pattern_15.png) | ![pattern_16-11220](11220/previews/pattern_16.png) | ![pattern_17-11220](11220/previews/pattern_17.png) | ![pattern_18-11220](11220/previews/pattern_18.png) | ![pattern_19-11220](11220/previews/pattern_19.png) | ![pattern_20-11220](11220/previews/pattern_20.png) | ![pattern_21-11220](11220/previews/pattern_21.png) | ![pattern_22-11220](11220/previews/pattern_22.png) | ![bikini-11220](11220/previews/bikini.png) | [<NSFW, click to see>](11220/previews/bondage.png) | ![free-11220](11220/previews/free.png) | ![maid-11220](11220/previews/maid.png) | ![miko-11220](11220/previews/miko.png) | [<NSFW, click to see>](11220/previews/nude.png) | [<NSFW, click to see>](11220/previews/nude2.png) | ![suit-11220](11220/previews/suit.png) | ![yukata-11220](11220/previews/yukata.png) |
| 10200 | 0.944 | [Download](10200/elaina_majonotabitabi.zip) | ![pattern_1-10200](10200/previews/pattern_1.png) | ![pattern_2-10200](10200/previews/pattern_2.png) | ![pattern_3-10200](10200/previews/pattern_3.png) | ![pattern_4-10200](10200/previews/pattern_4.png) | ![pattern_5-10200](10200/previews/pattern_5.png) | ![pattern_6-10200](10200/previews/pattern_6.png) | ![pattern_7-10200](10200/previews/pattern_7.png) | ![pattern_8-10200](10200/previews/pattern_8.png) | ![pattern_9-10200](10200/previews/pattern_9.png) | ![pattern_10-10200](10200/previews/pattern_10.png) | ![pattern_11-10200](10200/previews/pattern_11.png) | ![pattern_12-10200](10200/previews/pattern_12.png) | ![pattern_13-10200](10200/previews/pattern_13.png) | ![pattern_14-10200](10200/previews/pattern_14.png) | ![pattern_15-10200](10200/previews/pattern_15.png) | ![pattern_16-10200](10200/previews/pattern_16.png) | ![pattern_17-10200](10200/previews/pattern_17.png) | ![pattern_18-10200](10200/previews/pattern_18.png) | ![pattern_19-10200](10200/previews/pattern_19.png) | ![pattern_20-10200](10200/previews/pattern_20.png) | ![pattern_21-10200](10200/previews/pattern_21.png) | ![pattern_22-10200](10200/previews/pattern_22.png) | ![bikini-10200](10200/previews/bikini.png) | [<NSFW, click to see>](10200/previews/bondage.png) | ![free-10200](10200/previews/free.png) | ![maid-10200](10200/previews/maid.png) | ![miko-10200](10200/previews/miko.png) | [<NSFW, click to see>](10200/previews/nude.png) | [<NSFW, click to see>](10200/previews/nude2.png) | ![suit-10200](10200/previews/suit.png) | ![yukata-10200](10200/previews/yukata.png) |
| 9180 | 0.949 | [Download](9180/elaina_majonotabitabi.zip) | ![pattern_1-9180](9180/previews/pattern_1.png) | ![pattern_2-9180](9180/previews/pattern_2.png) | ![pattern_3-9180](9180/previews/pattern_3.png) | ![pattern_4-9180](9180/previews/pattern_4.png) | ![pattern_5-9180](9180/previews/pattern_5.png) | ![pattern_6-9180](9180/previews/pattern_6.png) | ![pattern_7-9180](9180/previews/pattern_7.png) | ![pattern_8-9180](9180/previews/pattern_8.png) | ![pattern_9-9180](9180/previews/pattern_9.png) | ![pattern_10-9180](9180/previews/pattern_10.png) | ![pattern_11-9180](9180/previews/pattern_11.png) | ![pattern_12-9180](9180/previews/pattern_12.png) | ![pattern_13-9180](9180/previews/pattern_13.png) | ![pattern_14-9180](9180/previews/pattern_14.png) | ![pattern_15-9180](9180/previews/pattern_15.png) | ![pattern_16-9180](9180/previews/pattern_16.png) | ![pattern_17-9180](9180/previews/pattern_17.png) | ![pattern_18-9180](9180/previews/pattern_18.png) | ![pattern_19-9180](9180/previews/pattern_19.png) | ![pattern_20-9180](9180/previews/pattern_20.png) | ![pattern_21-9180](9180/previews/pattern_21.png) | ![pattern_22-9180](9180/previews/pattern_22.png) | ![bikini-9180](9180/previews/bikini.png) | [<NSFW, click to see>](9180/previews/bondage.png) | ![free-9180](9180/previews/free.png) | ![maid-9180](9180/previews/maid.png) | ![miko-9180](9180/previews/miko.png) | [<NSFW, click to see>](9180/previews/nude.png) | [<NSFW, click to see>](9180/previews/nude2.png) | ![suit-9180](9180/previews/suit.png) | ![yukata-9180](9180/previews/yukata.png) |
| 8160 | 0.949 | [Download](8160/elaina_majonotabitabi.zip) | ![pattern_1-8160](8160/previews/pattern_1.png) | ![pattern_2-8160](8160/previews/pattern_2.png) | ![pattern_3-8160](8160/previews/pattern_3.png) | ![pattern_4-8160](8160/previews/pattern_4.png) | ![pattern_5-8160](8160/previews/pattern_5.png) | ![pattern_6-8160](8160/previews/pattern_6.png) | ![pattern_7-8160](8160/previews/pattern_7.png) | ![pattern_8-8160](8160/previews/pattern_8.png) | ![pattern_9-8160](8160/previews/pattern_9.png) | ![pattern_10-8160](8160/previews/pattern_10.png) | ![pattern_11-8160](8160/previews/pattern_11.png) | ![pattern_12-8160](8160/previews/pattern_12.png) | ![pattern_13-8160](8160/previews/pattern_13.png) | ![pattern_14-8160](8160/previews/pattern_14.png) | ![pattern_15-8160](8160/previews/pattern_15.png) | ![pattern_16-8160](8160/previews/pattern_16.png) | ![pattern_17-8160](8160/previews/pattern_17.png) | ![pattern_18-8160](8160/previews/pattern_18.png) | ![pattern_19-8160](8160/previews/pattern_19.png) | ![pattern_20-8160](8160/previews/pattern_20.png) | ![pattern_21-8160](8160/previews/pattern_21.png) | ![pattern_22-8160](8160/previews/pattern_22.png) | ![bikini-8160](8160/previews/bikini.png) | [<NSFW, click to see>](8160/previews/bondage.png) | ![free-8160](8160/previews/free.png) | ![maid-8160](8160/previews/maid.png) | ![miko-8160](8160/previews/miko.png) | [<NSFW, click to see>](8160/previews/nude.png) | [<NSFW, click to see>](8160/previews/nude2.png) | ![suit-8160](8160/previews/suit.png) | ![yukata-8160](8160/previews/yukata.png) |
| **7140** | **0.953** | [**Download**](7140/elaina_majonotabitabi.zip) | ![pattern_1-7140](7140/previews/pattern_1.png) | ![pattern_2-7140](7140/previews/pattern_2.png) | ![pattern_3-7140](7140/previews/pattern_3.png) | ![pattern_4-7140](7140/previews/pattern_4.png) | ![pattern_5-7140](7140/previews/pattern_5.png) | ![pattern_6-7140](7140/previews/pattern_6.png) | ![pattern_7-7140](7140/previews/pattern_7.png) | ![pattern_8-7140](7140/previews/pattern_8.png) | ![pattern_9-7140](7140/previews/pattern_9.png) | ![pattern_10-7140](7140/previews/pattern_10.png) | ![pattern_11-7140](7140/previews/pattern_11.png) | ![pattern_12-7140](7140/previews/pattern_12.png) | ![pattern_13-7140](7140/previews/pattern_13.png) | ![pattern_14-7140](7140/previews/pattern_14.png) | ![pattern_15-7140](7140/previews/pattern_15.png) | ![pattern_16-7140](7140/previews/pattern_16.png) | ![pattern_17-7140](7140/previews/pattern_17.png) | ![pattern_18-7140](7140/previews/pattern_18.png) | ![pattern_19-7140](7140/previews/pattern_19.png) | ![pattern_20-7140](7140/previews/pattern_20.png) | ![pattern_21-7140](7140/previews/pattern_21.png) | ![pattern_22-7140](7140/previews/pattern_22.png) | ![bikini-7140](7140/previews/bikini.png) | [<NSFW, click to see>](7140/previews/bondage.png) | ![free-7140](7140/previews/free.png) | ![maid-7140](7140/previews/maid.png) | ![miko-7140](7140/previews/miko.png) | [<NSFW, click to see>](7140/previews/nude.png) | [<NSFW, click to see>](7140/previews/nude2.png) | ![suit-7140](7140/previews/suit.png) | ![yukata-7140](7140/previews/yukata.png) |
| 6120 | 0.952 | [Download](6120/elaina_majonotabitabi.zip) | ![pattern_1-6120](6120/previews/pattern_1.png) | ![pattern_2-6120](6120/previews/pattern_2.png) | ![pattern_3-6120](6120/previews/pattern_3.png) | ![pattern_4-6120](6120/previews/pattern_4.png) | ![pattern_5-6120](6120/previews/pattern_5.png) | ![pattern_6-6120](6120/previews/pattern_6.png) | ![pattern_7-6120](6120/previews/pattern_7.png) | ![pattern_8-6120](6120/previews/pattern_8.png) | ![pattern_9-6120](6120/previews/pattern_9.png) | ![pattern_10-6120](6120/previews/pattern_10.png) | ![pattern_11-6120](6120/previews/pattern_11.png) | ![pattern_12-6120](6120/previews/pattern_12.png) | ![pattern_13-6120](6120/previews/pattern_13.png) | ![pattern_14-6120](6120/previews/pattern_14.png) | ![pattern_15-6120](6120/previews/pattern_15.png) | ![pattern_16-6120](6120/previews/pattern_16.png) | ![pattern_17-6120](6120/previews/pattern_17.png) | ![pattern_18-6120](6120/previews/pattern_18.png) | ![pattern_19-6120](6120/previews/pattern_19.png) | ![pattern_20-6120](6120/previews/pattern_20.png) | ![pattern_21-6120](6120/previews/pattern_21.png) | ![pattern_22-6120](6120/previews/pattern_22.png) | ![bikini-6120](6120/previews/bikini.png) | [<NSFW, click to see>](6120/previews/bondage.png) | ![free-6120](6120/previews/free.png) | ![maid-6120](6120/previews/maid.png) | ![miko-6120](6120/previews/miko.png) | [<NSFW, click to see>](6120/previews/nude.png) | [<NSFW, click to see>](6120/previews/nude2.png) | ![suit-6120](6120/previews/suit.png) | ![yukata-6120](6120/previews/yukata.png) |
| 5100 | 0.930 | [Download](5100/elaina_majonotabitabi.zip) | ![pattern_1-5100](5100/previews/pattern_1.png) | ![pattern_2-5100](5100/previews/pattern_2.png) | ![pattern_3-5100](5100/previews/pattern_3.png) | ![pattern_4-5100](5100/previews/pattern_4.png) | ![pattern_5-5100](5100/previews/pattern_5.png) | ![pattern_6-5100](5100/previews/pattern_6.png) | ![pattern_7-5100](5100/previews/pattern_7.png) | ![pattern_8-5100](5100/previews/pattern_8.png) | ![pattern_9-5100](5100/previews/pattern_9.png) | ![pattern_10-5100](5100/previews/pattern_10.png) | ![pattern_11-5100](5100/previews/pattern_11.png) | ![pattern_12-5100](5100/previews/pattern_12.png) | ![pattern_13-5100](5100/previews/pattern_13.png) | ![pattern_14-5100](5100/previews/pattern_14.png) | ![pattern_15-5100](5100/previews/pattern_15.png) | ![pattern_16-5100](5100/previews/pattern_16.png) | ![pattern_17-5100](5100/previews/pattern_17.png) | ![pattern_18-5100](5100/previews/pattern_18.png) | ![pattern_19-5100](5100/previews/pattern_19.png) | ![pattern_20-5100](5100/previews/pattern_20.png) | ![pattern_21-5100](5100/previews/pattern_21.png) | ![pattern_22-5100](5100/previews/pattern_22.png) | ![bikini-5100](5100/previews/bikini.png) | [<NSFW, click to see>](5100/previews/bondage.png) | ![free-5100](5100/previews/free.png) | ![maid-5100](5100/previews/maid.png) | ![miko-5100](5100/previews/miko.png) | [<NSFW, click to see>](5100/previews/nude.png) | [<NSFW, click to see>](5100/previews/nude2.png) | ![suit-5100](5100/previews/suit.png) | ![yukata-5100](5100/previews/yukata.png) |
| 4080 | 0.951 | [Download](4080/elaina_majonotabitabi.zip) | ![pattern_1-4080](4080/previews/pattern_1.png) | ![pattern_2-4080](4080/previews/pattern_2.png) | ![pattern_3-4080](4080/previews/pattern_3.png) | ![pattern_4-4080](4080/previews/pattern_4.png) | ![pattern_5-4080](4080/previews/pattern_5.png) | ![pattern_6-4080](4080/previews/pattern_6.png) | ![pattern_7-4080](4080/previews/pattern_7.png) | ![pattern_8-4080](4080/previews/pattern_8.png) | ![pattern_9-4080](4080/previews/pattern_9.png) | ![pattern_10-4080](4080/previews/pattern_10.png) | ![pattern_11-4080](4080/previews/pattern_11.png) | ![pattern_12-4080](4080/previews/pattern_12.png) | ![pattern_13-4080](4080/previews/pattern_13.png) | ![pattern_14-4080](4080/previews/pattern_14.png) | ![pattern_15-4080](4080/previews/pattern_15.png) | ![pattern_16-4080](4080/previews/pattern_16.png) | ![pattern_17-4080](4080/previews/pattern_17.png) | ![pattern_18-4080](4080/previews/pattern_18.png) | ![pattern_19-4080](4080/previews/pattern_19.png) | ![pattern_20-4080](4080/previews/pattern_20.png) | ![pattern_21-4080](4080/previews/pattern_21.png) | ![pattern_22-4080](4080/previews/pattern_22.png) | ![bikini-4080](4080/previews/bikini.png) | [<NSFW, click to see>](4080/previews/bondage.png) | ![free-4080](4080/previews/free.png) | ![maid-4080](4080/previews/maid.png) | ![miko-4080](4080/previews/miko.png) | [<NSFW, click to see>](4080/previews/nude.png) | [<NSFW, click to see>](4080/previews/nude2.png) | ![suit-4080](4080/previews/suit.png) | ![yukata-4080](4080/previews/yukata.png) |
| 3060 | 0.952 | [Download](3060/elaina_majonotabitabi.zip) | ![pattern_1-3060](3060/previews/pattern_1.png) | ![pattern_2-3060](3060/previews/pattern_2.png) | ![pattern_3-3060](3060/previews/pattern_3.png) | ![pattern_4-3060](3060/previews/pattern_4.png) | ![pattern_5-3060](3060/previews/pattern_5.png) | ![pattern_6-3060](3060/previews/pattern_6.png) | ![pattern_7-3060](3060/previews/pattern_7.png) | ![pattern_8-3060](3060/previews/pattern_8.png) | ![pattern_9-3060](3060/previews/pattern_9.png) | ![pattern_10-3060](3060/previews/pattern_10.png) | ![pattern_11-3060](3060/previews/pattern_11.png) | ![pattern_12-3060](3060/previews/pattern_12.png) | ![pattern_13-3060](3060/previews/pattern_13.png) | ![pattern_14-3060](3060/previews/pattern_14.png) | ![pattern_15-3060](3060/previews/pattern_15.png) | ![pattern_16-3060](3060/previews/pattern_16.png) | ![pattern_17-3060](3060/previews/pattern_17.png) | ![pattern_18-3060](3060/previews/pattern_18.png) | ![pattern_19-3060](3060/previews/pattern_19.png) | ![pattern_20-3060](3060/previews/pattern_20.png) | ![pattern_21-3060](3060/previews/pattern_21.png) | ![pattern_22-3060](3060/previews/pattern_22.png) | ![bikini-3060](3060/previews/bikini.png) | [<NSFW, click to see>](3060/previews/bondage.png) | ![free-3060](3060/previews/free.png) | ![maid-3060](3060/previews/maid.png) | ![miko-3060](3060/previews/miko.png) | [<NSFW, click to see>](3060/previews/nude.png) | [<NSFW, click to see>](3060/previews/nude2.png) | ![suit-3060](3060/previews/suit.png) | ![yukata-3060](3060/previews/yukata.png) |
| 2040 | 0.951 | [Download](2040/elaina_majonotabitabi.zip) | ![pattern_1-2040](2040/previews/pattern_1.png) | ![pattern_2-2040](2040/previews/pattern_2.png) | ![pattern_3-2040](2040/previews/pattern_3.png) | ![pattern_4-2040](2040/previews/pattern_4.png) | ![pattern_5-2040](2040/previews/pattern_5.png) | ![pattern_6-2040](2040/previews/pattern_6.png) | ![pattern_7-2040](2040/previews/pattern_7.png) | ![pattern_8-2040](2040/previews/pattern_8.png) | ![pattern_9-2040](2040/previews/pattern_9.png) | ![pattern_10-2040](2040/previews/pattern_10.png) | ![pattern_11-2040](2040/previews/pattern_11.png) | ![pattern_12-2040](2040/previews/pattern_12.png) | ![pattern_13-2040](2040/previews/pattern_13.png) | ![pattern_14-2040](2040/previews/pattern_14.png) | ![pattern_15-2040](2040/previews/pattern_15.png) | ![pattern_16-2040](2040/previews/pattern_16.png) | ![pattern_17-2040](2040/previews/pattern_17.png) | ![pattern_18-2040](2040/previews/pattern_18.png) | ![pattern_19-2040](2040/previews/pattern_19.png) | ![pattern_20-2040](2040/previews/pattern_20.png) | ![pattern_21-2040](2040/previews/pattern_21.png) | ![pattern_22-2040](2040/previews/pattern_22.png) | ![bikini-2040](2040/previews/bikini.png) | [<NSFW, click to see>](2040/previews/bondage.png) | ![free-2040](2040/previews/free.png) | ![maid-2040](2040/previews/maid.png) | ![miko-2040](2040/previews/miko.png) | [<NSFW, click to see>](2040/previews/nude.png) | [<NSFW, click to see>](2040/previews/nude2.png) | ![suit-2040](2040/previews/suit.png) | ![yukata-2040](2040/previews/yukata.png) |
| 1020 | 0.941 | [Download](1020/elaina_majonotabitabi.zip) | ![pattern_1-1020](1020/previews/pattern_1.png) | ![pattern_2-1020](1020/previews/pattern_2.png) | ![pattern_3-1020](1020/previews/pattern_3.png) | ![pattern_4-1020](1020/previews/pattern_4.png) | ![pattern_5-1020](1020/previews/pattern_5.png) | ![pattern_6-1020](1020/previews/pattern_6.png) | ![pattern_7-1020](1020/previews/pattern_7.png) | ![pattern_8-1020](1020/previews/pattern_8.png) | ![pattern_9-1020](1020/previews/pattern_9.png) | ![pattern_10-1020](1020/previews/pattern_10.png) | ![pattern_11-1020](1020/previews/pattern_11.png) | ![pattern_12-1020](1020/previews/pattern_12.png) | ![pattern_13-1020](1020/previews/pattern_13.png) | ![pattern_14-1020](1020/previews/pattern_14.png) | ![pattern_15-1020](1020/previews/pattern_15.png) | ![pattern_16-1020](1020/previews/pattern_16.png) | ![pattern_17-1020](1020/previews/pattern_17.png) | ![pattern_18-1020](1020/previews/pattern_18.png) | ![pattern_19-1020](1020/previews/pattern_19.png) | ![pattern_20-1020](1020/previews/pattern_20.png) | ![pattern_21-1020](1020/previews/pattern_21.png) | ![pattern_22-1020](1020/previews/pattern_22.png) | ![bikini-1020](1020/previews/bikini.png) | [<NSFW, click to see>](1020/previews/bondage.png) | ![free-1020](1020/previews/free.png) | ![maid-1020](1020/previews/maid.png) | ![miko-1020](1020/previews/miko.png) | [<NSFW, click to see>](1020/previews/nude.png) | [<NSFW, click to see>](1020/previews/nude2.png) | ![suit-1020](1020/previews/suit.png) | ![yukata-1020](1020/previews/yukata.png) |
ntc-ai/SDXL-LoRA-slider.soulful
ntc-ai
2024-01-06T17:08:32Z
139
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-06T17:08:29Z
---
language:
- en
thumbnail: "images/evaluate/soulful.../soulful_17_3.0.png"
widget:
- text: soulful
  output:
    url: images/soulful_17_3.0.png
- text: soulful
  output:
    url: images/soulful_19_3.0.png
- text: soulful
  output:
    url: images/soulful_20_3.0.png
- text: soulful
  output:
    url: images/soulful_21_3.0.png
- text: soulful
  output:
    url: images/soulful_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "soulful"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---

# ntcai.xyz slider - soulful (SDXL LoRA)

| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/soulful_17_-3.0.png" width=256 height=256 /> | <img src="images/soulful_17_0.0.png" width=256 height=256 /> | <img src="images/soulful_17_3.0.png" width=256 height=256 /> |
| <img src="images/soulful_19_-3.0.png" width=256 height=256 /> | <img src="images/soulful_19_0.0.png" width=256 height=256 /> | <img src="images/soulful_19_3.0.png" width=256 height=256 /> |
| <img src="images/soulful_20_-3.0.png" width=256 height=256 /> | <img src="images/soulful_20_0.0.png" width=256 height=256 /> | <img src="images/soulful_20_3.0.png" width=256 height=256 /> |

## Download

Weights for this model are available in Safetensors format.

## Trigger words

You can apply this LoRA with trigger words for additional effect:

```
soulful
```

## Use in diffusers

```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch

pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.soulful', weight_name='soulful.safetensors', adapter_name="soulful")

# Activate the LoRA
pipe.set_adapters(["soulful"], adapter_weights=[2.0])

prompt = "medieval rich kingpin sitting in a tavern, soulful"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```

## Support the Patreon

If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).

By joining our Patreon, you'll gain access to an ever-growing library of over 900 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.

Your support on Patreon will allow us to continue developing and refining new models.

## Other resources

- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
kikim6114/distilbert-base-uncased-finetuned-emotion
kikim6114
2024-01-06T17:07:12Z
93
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-04T15:17:27Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.925
    - name: F1
      type: f1
      value: 0.9248990116000972
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2129
- Accuracy: 0.925
- F1: 0.9249

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8234        | 1.0   | 250  | 0.2981          | 0.9135   | 0.9129 |
| 0.2432        | 2.0   | 500  | 0.2129          | 0.925    | 0.9249 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.0
- Datasets 2.16.1
- Tokenizers 0.13.0.dev0
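Since the usage sections above are left as "More information needed", here is a minimal inference sketch (an illustration, not part of the original card); the example sentence is arbitrary and the label names follow the emotion dataset:

```python
# Minimal sketch: classify a sentence with the fine-tuned emotion model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kikim6114/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled about the results of this experiment!"))
# e.g. [{'label': 'joy', 'score': ...}] -- exact label strings depend on the saved config
```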
Johnlhugface/rl_course_vizdoom_health_gathering_supreme
Johnlhugface
2024-01-06T17:06:57Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-06T17:06:28Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 6.93 +/- 2.75
      name: mean_reward
      verified: false
---

An **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Johnlhugface/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment. (The auto-generated command originally pointed at a Colab kernel launcher; for the VizDoom examples shipped with Sample-Factory the module is `sf_examples.vizdoom.enjoy_vizdoom`.)

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:

```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
prithivee/distilbert-base-uncased-lora-text-classification
prithivee
2024-01-06T17:05:52Z
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:adapter:distilbert/distilbert-base-uncased", "license:apache-2.0", "region:us" ]
null
2024-01-06T17:02:46Z
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-lora-text-classification
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4946
- Accuracy: {'accuracy': 0.891}

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy            |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log        | 1.0   | 250  | 0.3426          | {'accuracy': 0.88}  |
| 0.3991        | 2.0   | 500  | 0.5703          | {'accuracy': 0.874} |
| 0.3991        | 3.0   | 750  | 0.4946          | {'accuracy': 0.891} |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
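As the usage sections above are empty, here is a minimal inference sketch (an assumption, not from the original card): it loads the LoRA adapter on top of the distilbert base via PEFT's auto class, and presumes a binary classification head since the card does not document the labels.

```python
# Minimal sketch: run the LoRA-adapted classifier. The example text and the
# assumption of a binary head are illustrative only.
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

model = AutoPeftModelForSequenceClassification.from_pretrained(
    "prithivee/distilbert-base-uncased-lora-text-classification"
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # class probabilities
```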
TinyPixel/qwen-1.8B-OrcaMini
TinyPixel
2024-01-06T17:01:31Z
17
0
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "dataset:TinyPixel/orca-bad", "autotrain_compatible", "region:us" ]
text-generation
2024-01-06T15:59:48Z
---
datasets:
- TinyPixel/orca-bad
---

## Usage

```bash
pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git
pip install -q datasets bitsandbytes einops wandb sentencepiece transformers_stream_generator tiktoken
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("TinyPixel/qwen-1.8B-OrcaMini", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TinyPixel/qwen-1.8B-OrcaMini",
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto",
                                             trust_remote_code=True)

device = "cuda:0"

text = '''SYSTEM:
USER: what is the difference between a dog and a cat on a biological level?
ASSISTANT:'''

inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, top_p=0.95, temperature=0.7, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
detakarang/Phixphi-4x2.7b
detakarang
2024-01-06T16:56:57Z
17
0
transformers
[ "transformers", "safetensors", "phi-msft", "text-generation", "merge", "mergekit", "lazymergekit", "mrm8488/phi-2-coder", "microsoft/phi-2", "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T16:55:21Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mrm8488/phi-2-coder
- microsoft/phi-2
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
---

# Phixphi-4x2.7b

This model is a merge of the following models made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
* [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
* [Yhyu13/phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1)

## 🧩 Configuration

```yaml
models:
  - model: mrm8488/phi-2-coder
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: microsoft/phi-2
    parameters:
      density: 0.5
      weight: [0, 0.3, 0.7, 1] # weight gradient
  - model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
    parameters:
      density: 0.33
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2_6-phi-2
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

## 💻 Usage

```bash
pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "detakarang/Phixphi-4x2.7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
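For readers who want to reproduce a merge like this locally rather than through the LazyMergekit notebook, a hedged sketch using mergekit's CLI follows; the config filename and output directory are placeholders, and the original merge was produced via the notebook, not necessarily this exact invocation:

```bash
# Save the YAML configuration above as config.yaml, then run mergekit on it.
pip install -q mergekit
mergekit-yaml config.yaml ./Phixphi-merged --copy-tokenizer
```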
ostapeno/newt_adaNeo1B_ropes_prompt_beginning_sbs0.5_svdemb_sgd_full_ft_finegrained
ostapeno
2024-01-06T16:54:42Z
0
0
null
[ "region:us" ]
null
2024-01-06T11:31:21Z
Number of experts present in the library: 10

| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning_v9 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |

Last updated on: 2024-01-06 16:54:42+00:00
mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-v2-qlora-4bit
mtc
2024-01-06T16:47:19Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-06T16:46:38Z
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.1
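The "How to Get Started" section above is empty; a minimal sketch for loading this QLoRA adapter (the 4-bit quantization settings and the prompt are assumptions inferred from the repository name, not documented by the card) would be:

```python
# Minimal sketch: load the Mistral-7B base in 4-bit and apply the
# pubmed-summarization QLoRA adapter from this repo.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-v2-qlora-4bit"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Summarize the following abstract:\n<abstract text here>"  # prompt format is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```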
roktimsardar123/client001
roktimsardar123
2024-01-06T16:45:19Z
2
1
diffusers
[ "diffusers", "text-to-image", "autotrain", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
2024-01-06T15:29:37Z
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a apxu woman
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---

# DreamBooth trained by AutoTrain

Text encoder was not trained.
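For completeness, a hedged usage sketch: it assumes AutoTrain's DreamBooth run saved LoRA weights into this repository (its usual output format), which is not stated in the card itself.

```python
# Minimal sketch: load the SDXL base model, apply the DreamBooth LoRA weights
# from this repo, and prompt with the instance prompt above.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("roktimsardar123/client001")  # assumption: LoRA weights at repo root

image = pipe("photo of a apxu woman", num_inference_steps=30).images[0]
image.save("apxu.png")
```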
thierryteisseire/Llama-2-7b-chat-hf-fine-tuned-adapters
thierryteisseire
2024-01-06T16:41:21Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-06T13:02:12Z
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.2.dev0
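The "How to Get Started" section above is empty; a minimal sketch for applying these adapters to the Llama-2-7b-chat base (the prompt format is the standard Llama-2 `[INST]` template, assumed rather than documented here; the gated meta-llama weights require an approved Hugging Face token):

```python
# Minimal sketch: load the Llama-2-7b-chat base and apply the fine-tuned adapters.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "thierryteisseire/Llama-2-7b-chat-hf-fine-tuned-adapters")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

inputs = tokenizer("[INST] What is PEFT? [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```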
louistichelman/BART-finetuned-on-training-knowledge
louistichelman
2024-01-06T16:33:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-06T12:43:07Z
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: BART-finetuned-on-training-knowledge
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BART-finetuned-on-training-knowledge

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1989
- Bleu: 3.6495
- Gen Len: 19.6357

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.4258        | 1.0   | 1679 | 2.2498          | 3.042  | 19.3821 |
| 2.0762        | 2.0   | 3358 | 2.1989          | 3.6495 | 19.6357 |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.14.1
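Since the card does not document the task or input format, a generic inference sketch (an assumption; the input string is a placeholder) would be:

```python
# Minimal sketch: generate text with the fine-tuned BART model via the
# text2text-generation pipeline. Input formatting is undocumented in the card.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="louistichelman/BART-finetuned-on-training-knowledge",
)
print(generator("Your input text here", max_length=60))
```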
MaVier19/zero-shot_text_classification_pre_trained
MaVier19
2024-01-06T16:30:25Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "base_model:finetune:MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-06T16:22:05Z
---
license: mit
base_model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: zero-shot_text_classification_pre_trained
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# zero-shot_text_classification_pre_trained

This model is a fine-tuned version of [MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8939
- Accuracy: 0.695
- F1: 0.6917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.7346        | 1.0   | 750  | 0.8939          | 0.695    | 0.6917 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
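As the usage sections are empty and the training dataset is unknown, here is a generic inference sketch (an assumption; the labels returned will be whatever was saved in the fine-tuned config):

```python
# Minimal sketch: run the fine-tuned classifier and inspect all class scores.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MaVier19/zero-shot_text_classification_pre_trained",
    top_k=None,  # return scores for every label
)
print(clf("The new graphics card delivers excellent performance."))
```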
Johnlhugface/LunarLander-v2
Johnlhugface
2024-01-06T16:25:32Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-06T16:25:26Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -258.68 +/- 162.92
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'exp_name': 'test1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 1,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'f': None,
 'repo_id': 'Johnlhugface/LunarLander-v2',
 'batch_size': 128,
 'minibatch_size': 32}
```
kamaltdin/stable_diffusion_models
kamaltdin
2024-01-06T16:24:32Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-15T07:48:37Z
---
license: creativeml-openrail-m
---