Dataset schema (column, dtype, observed range or cardinality):

| Column | Dtype | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-04-13 18:27:00 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 425 values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | date | 2022-03-02 23:29:04 to 2025-04-13 18:24:29 |
| card | string | length 11 to 1.01M |
ZhaoxiZheng/whisper-tiny
ZhaoxiZheng
"2025-01-07T19:42:43Z"
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-01-07T00:19:19Z"
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.32762691853600945 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6637 - Wer Ortho: 0.3263 - Wer: 0.3276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:| | 1.3521 | 1.7857 | 50 | 0.5871 | 0.4127 | 0.3849 | | 0.2839 | 3.5714 | 100 | 0.4864 | 0.3356 | 0.3300 | | 0.0983 | 5.3571 | 150 | 0.5188 | 0.3387 | 0.3270 | | 0.0285 | 7.1429 | 200 | 0.5651 | 0.3282 | 0.3164 | | 0.0064 | 8.9286 | 250 | 0.5842 | 0.3152 | 0.3123 | | 0.0021 | 10.7143 | 300 | 0.6164 | 0.3313 | 0.3312 | | 0.0013 | 12.5 | 350 | 0.6319 | 0.3263 | 0.3259 | | 0.0009 | 14.2857 | 400 | 0.6441 | 0.3245 | 0.3235 | | 0.0007 | 16.0714 | 450 | 0.6542 | 0.3251 | 0.3241 | | 0.0006 | 17.8571 | 500 | 0.6637 | 0.3263 | 0.3276 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
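For quick inference, a minimal sketch using the `transformers` automatic-speech-recognition pipeline; the audio file path is a placeholder, not part of this repo:

```python
# Minimal inference sketch for this fine-tuned Whisper checkpoint.
# "sample.wav" is a placeholder path; Whisper expects 16 kHz audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ZhaoxiZheng/whisper-tiny")
print(asr("sample.wav")["text"])
```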
Jovie/Robotics
Jovie
"2024-09-25T18:59:05Z"
20
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "region:us" ]
text-to-image
"2024-09-23T17:45:32Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- closeup portrait photo of an Elysium Robot Cyborg Samurai, macro, a captivating vibrant dark capturing the essence of a cyborg Bedouin sorcerer in fight stance, Kneeling in front of a Japanese shrine. ethereal, smoky backdrop. throwing a translucent red/translucent amber/black, weapon, katana, holding katana, atmospheric haze, Film grain, cinematic film still, shallow depth of field, highly detailed, high budget, cinemascope, moody, epic, OverallDetail, gorgeous, 2000s vintage RAW photo, photorealistic, candid camera, color graded cinematic, eye catchlights, atmospheric lighting, skin pores, imperfections, natural, shallow dof, output: url: images/example_bhiohvbzi.png base_model: black-forest-labs/FLUX.1-schnell instance_prompt: cyberpunk edgerunners --- # robotics model style <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Jovie/Robotics/tree/main) them in the Files & versions tab.
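Since no usage snippet is given, here is a minimal `diffusers` sketch for applying this LoRA on top of the FLUX.1-schnell base model named above; the prompt and inference settings are illustrative assumptions:

```python
# Sketch: load the FLUX.1-schnell base model and attach this LoRA adapter.
# Prompt and settings are illustrative; schnell is tuned for very few steps.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jovie/Robotics")

image = pipe(
    "closeup portrait photo of a robot cyborg samurai",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("robot.png")
```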
IrwinD/log_sage_ppo_model
IrwinD
"2024-04-26T01:55:50Z"
112
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "trl", "ppo", "reinforcement-learning", "summarization", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
"2024-04-23T04:18:16Z"
--- license: apache-2.0 tags: - trl - ppo - transformers - reinforcement-learning pipeline_tag: summarization --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="IrwinD//tmp/tmpoz9k3o9o/IrwinD/log_sage_ppo_model") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("IrwinD//tmp/tmpoz9k3o9o/IrwinD/log_sage_ppo_model") model = AutoModelForCausalLMWithValueHead.from_pretrained("IrwinD//tmp/tmpoz9k3o9o/IrwinD/log_sage_ppo_model") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
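Note that the snippets above come from the generic TRL card template and keep a temporary local path as the model id; the actual repo id is `IrwinD/log_sage_ppo_model`. Because the tags mark this as a T5 (encoder-decoder) checkpoint with a summarization pipeline tag, a seq2seq loading sketch is likely the better fit (an assumption based on the tags, not on the original card):

```python
# Hedged sketch: load this T5-based PPO checkpoint as a seq2seq model,
# inferred from the repo tags (t5, text2text-generation, summarization).
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForSeq2SeqLMWithValueHead

# Plain inference through the summarization pipeline.
summarizer = pipeline("summarization", model="IrwinD/log_sage_ppo_model")

# Or load with the value head for further TRL training.
tokenizer = AutoTokenizer.from_pretrained("IrwinD/log_sage_ppo_model")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("IrwinD/log_sage_ppo_model")
```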
codegood/GPT2
codegood
"2024-06-17T03:45:28Z"
79
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-17T03:45:17Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
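The "How to Get Started" section above is empty; based only on the repo tags (`gpt2`, `text-generation`, `4-bit`, `bitsandbytes`), a hedged loading sketch:

```python
# Hedged sketch inferred from the repo tags, not from the card itself.
# The checkpoint is stored 4-bit (bitsandbytes), so loading it requires
# the bitsandbytes package and a CUDA device.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codegood/GPT2")
model = AutoModelForCausalLM.from_pretrained("codegood/GPT2", device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```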
GordonChang/gemma3-12b-it-finetuned-v1-merged
GordonChang
"2025-03-26T03:06:07Z"
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-26T02:31:24Z"
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** GordonChang - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
silviasapora/gemma-7b-silvia-basic-5e-5-05-vshp2
silviasapora
"2025-02-26T21:51:29Z"
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma", "text-generation", "generated_from_trainer", "alignment-handbook", "trl", "orpo", "conversational", "dataset:argilla/dpo-mix-7k", "arxiv:2403.07691", "base_model:google/gemma-7b", "base_model:finetune:google/gemma-7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-19T18:49:44Z"
--- base_model: google/gemma-7b datasets: - argilla/dpo-mix-7k library_name: transformers model_name: google/gemma-7b tags: - generated_from_trainer - alignment-handbook - trl - orpo licence: license --- # Model Card for google/gemma-7b This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vshp2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/4uyt69lx) This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691). ### Framework versions - TRL: 0.13.0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite ORPO as: ```bibtex @article{hong2024orpo, title = {{ORPO: Monolithic Preference Optimization without Reference Model}}, author = {Jiwoo Hong and Noah Lee and James Thorne}, year = 2024, eprint = {arXiv:2403.07691} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
trenden/6cb2254f-ce3b-4df8-8168-2234cfe0f843
trenden
"2025-02-23T10:24:03Z"
0
0
peft
[ "peft", "llama", "generated_from_trainer", "base_model:unsloth/tinyllama", "base_model:adapter:unsloth/tinyllama", "region:us" ]
null
"2025-02-23T10:23:56Z"
--- library_name: peft tags: - generated_from_trainer base_model: unsloth/tinyllama model-index: - name: trenden/6cb2254f-ce3b-4df8-8168-2234cfe0f843 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trenden/6cb2254f-ce3b-4df8-8168-2234cfe0f843 This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.2139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
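The card gives no usage snippet; since the repo stores a PEFT adapter for `unsloth/tinyllama` (per the `base_model` tag), a hedged loading sketch:

```python
# Hedged sketch: load the base model, then attach this PEFT adapter on top.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama")
model = PeftModel.from_pretrained(base, "trenden/6cb2254f-ce3b-4df8-8168-2234cfe0f843")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama")
```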
Frorozcol/Taxi-v3
Frorozcol
"2023-02-21T14:53:32Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-21T14:53:29Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Frorozcol/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
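The usage snippet above assumes a `load_from_hub` helper; one possible definition, assuming the agent is stored as a pickled dict as in the Hugging Face Deep RL course (this helper is not part of any library API):

```python
# One possible load_from_hub helper for pickled Q-learning agents.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```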
Yuriy81/ppo-LunarLander-v2
Yuriy81
"2024-01-31T09:49:06Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-01-31T09:48:45Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 261.24 +/- 9.76 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
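Pending the author's own code, a minimal loading sketch with `huggingface_sb3`; the `.zip` filename is a guess, so check the repo's Files tab for the actual checkpoint name:

```python
# Minimal sketch for the TODO above; the filename is an assumption.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Yuriy81/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # guessed name; verify in the repo
)
model = PPO.load(checkpoint)
```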
Ar4ikov/civitai_prompts_falcon_15k_v2_4bit
Ar4ikov
"2023-08-12T15:16:47Z"
11
1
peft
[ "peft", "region:us" ]
null
"2023-08-12T15:16:43Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
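For reference, the quantization settings listed above expressed as a `transformers` config object, e.g. for reloading a base model the same way before attaching this adapter (the base model id is not recorded in this card):

```python
# The bitsandbytes settings from the card, as a transformers config object.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```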
NasimB/gpt2-concat-guten-rarity-all-3p5k-1p8k
NasimB
"2023-07-08T22:49:08Z"
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-08T20:51:13Z"
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-rarity-all-3p5k-1p8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-rarity-all-3p5k-1p8k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.1924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.702 | 0.29 | 500 | 5.6455 | | 5.3702 | 0.59 | 1000 | 5.2062 | | 5.0235 | 0.88 | 1500 | 4.9548 | | 4.7448 | 1.18 | 2000 | 4.8046 | | 4.5901 | 1.47 | 2500 | 4.6826 | | 4.4798 | 1.77 | 3000 | 4.5785 | | 4.3425 | 2.06 | 3500 | 4.5017 | | 4.1565 | 2.36 | 4000 | 4.4481 | | 4.1361 | 2.65 | 4500 | 4.3913 | | 4.0872 | 2.95 | 5000 | 4.3408 | | 3.8648 | 3.24 | 5500 | 4.3344 | | 3.8269 | 3.54 | 6000 | 4.3033 | | 3.812 | 3.83 | 6500 | 4.2685 | | 3.682 | 4.12 | 7000 | 4.2696 | | 3.5391 | 4.42 | 7500 | 4.2633 | | 3.534 | 4.71 | 8000 | 4.2464 | | 3.5219 | 5.01 | 8500 | 4.2386 | | 3.346 | 5.3 | 9000 | 4.2473 | | 3.3421 | 5.6 | 9500 | 4.2453 | | 3.3464 | 5.89 | 10000 | 4.2450 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
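The card documents training only; for completeness, a minimal generation sketch (prompt and length are illustrative):

```python
# Minimal usage sketch for this GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/gpt2-concat-guten-rarity-all-3p5k-1p8k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```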
team-sanai/zoo_novel_expert
team-sanai
"2024-05-21T09:56:49Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-05-21T09:53:19Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
eriksu/heiko-7b
eriksu
"2024-03-02T18:42:26Z"
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-02T18:38:04Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
int1306866/0b4b5d88-d15e-4067-a47f-fca0f20c6828
int1306866
"2025-03-30T15:06:03Z"
0
0
null
[ "region:us" ]
null
"2025-03-30T15:05:20Z"
null
tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF
tensorblock
"2024-11-16T01:47:26Z"
40
0
null
[ "gguf", "finetuned", "text-generation", "TensorBlock", "GGUF", "en", "ko", "dataset:royboy0416/ko-alpaca", "base_model:refarde/OPEN-SOLAR-KO-10.7B-S-Core", "base_model:quantized:refarde/OPEN-SOLAR-KO-10.7B-S-Core", "license:apache-2.0", "region:us" ]
text-generation
"2024-11-15T11:40:41Z"
--- base_model: refarde/OPEN-SOLAR-KO-10.7B-S-Core license: apache-2.0 pipeline_tag: text-generation language: - en - ko tags: - finetuned - text-generation - TensorBlock - GGUF datasets: - royboy0416/ko-alpaca inference: false model_type: mixtral --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## refarde/OPEN-SOLAR-KO-10.7B-S-Core - GGUF This repo contains GGUF format model files for [refarde/OPEN-SOLAR-KO-10.7B-S-Core](https://huggingface.co/refarde/OPEN-SOLAR-KO-10.7B-S-Core). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). <div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [OPEN-SOLAR-KO-10.7B-S-Core-Q2_K.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q2_K.gguf) | Q2_K | 3.793 GB | smallest, significant quality loss - not recommended for most purposes | | [OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_S.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_S.gguf) | Q3_K_S | 4.414 GB | very small, high quality loss | | [OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_M.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_M.gguf) | Q3_K_M | 4.909 GB | very small, high quality loss | | [OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_L.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q3_K_L.gguf) | Q3_K_L | 5.333 GB | small, substantial quality loss | | [OPEN-SOLAR-KO-10.7B-S-Core-Q4_0.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q4_0.gguf) | Q4_0 | 5.733 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [OPEN-SOLAR-KO-10.7B-S-Core-Q4_K_S.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q4_K_S.gguf) | Q4_K_S | 5.776 GB | small, greater quality loss | | [OPEN-SOLAR-KO-10.7B-S-Core-Q4_K_M.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q4_K_M.gguf) | Q4_K_M | 6.095 GB | medium, balanced quality - recommended | | [OPEN-SOLAR-KO-10.7B-S-Core-Q5_0.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q5_0.gguf) | Q5_0 | 6.974 GB | legacy; 
medium, balanced quality - prefer using Q4_K_M | | [OPEN-SOLAR-KO-10.7B-S-Core-Q5_K_S.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q5_K_S.gguf) | Q5_K_S | 6.974 GB | large, low quality loss - recommended | | [OPEN-SOLAR-KO-10.7B-S-Core-Q5_K_M.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q5_K_M.gguf) | Q5_K_M | 7.160 GB | large, very low quality loss - recommended | | [OPEN-SOLAR-KO-10.7B-S-Core-Q6_K.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q6_K.gguf) | Q6_K | 8.292 GB | very large, extremely low quality loss | | [OPEN-SOLAR-KO-10.7B-S-Core-Q8_0.gguf](https://huggingface.co/tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF/blob/main/OPEN-SOLAR-KO-10.7B-S-Core-Q8_0.gguf) | Q8_0 | 10.740 GB | very large, extremely low quality loss - not recommended | ## Downloading instructions ### Command line First, install the Hugging Face CLI client: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download the individual model file to a local directory: ```shell huggingface-cli download tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF --include "OPEN-SOLAR-KO-10.7B-S-Core-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
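The same download can also be scripted from Python with `huggingface_hub`, equivalent to the CLI command above:

```python
# Python equivalent of the huggingface-cli download above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/OPEN-SOLAR-KO-10.7B-S-Core-GGUF",
    filename="OPEN-SOLAR-KO-10.7B-S-Core-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)
```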
QuantiPhy/aya-23-8B-8bq
QuantiPhy
"2024-06-25T16:43:11Z"
7
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-25T16:37:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ChristianAzinn/mxbai-embed-large-v1-gguf
ChristianAzinn
"2024-04-07T21:56:31Z"
701
2
sentence-transformers
[ "sentence-transformers", "gguf", "mteb", "transformers", "transformers.js", "feature-extraction", "en", "arxiv:2309.12871", "base_model:mixedbread-ai/mxbai-embed-large-v1", "base_model:quantized:mixedbread-ai/mxbai-embed-large-v1", "license:apache-2.0", "autotrain_compatible", "region:us" ]
feature-extraction
"2024-04-07T20:23:25Z"
--- base_model: mixedbread-ai/mxbai-embed-large-v1 inference: false language: - en license: apache-2.0 model_creator: mixedbread-ai model_name: mxbai-embed-large-v1 model_type: bert quantized_by: ChristianAzinn library_name: sentence-transformers pipeline_tag: feature-extraction tags: - mteb - transformers - transformers.js - gguf --- # mxbai-embed-large-v1-gguf Model creator: [MixedBread AI](https://huggingface.co/mixedbread-ai) Original model: [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) ## Original Description This is our base sentence embedding model. It was trained using [AnglE](https://arxiv.org/abs/2309.12871) loss on our high-quality large scale data. It achieves SOTA performance on BERT-large scale. Find out more in our [blog post](https://mixedbread.ai/blog/mxbai-embed-large-v1). ## Description This repo contains GGUF format files for the [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) embedding model. These files were converted and quantized with llama.cpp [PR 5500](https://github.com/ggerganov/llama.cpp/pull/5500), commit [34aa045de](https://github.com/ggerganov/llama.cpp/pull/5500/commits/34aa045de44271ff7ad42858c75739303b8dc6eb), on a consumer RTX 4090. This model supports up to 512 tokens of context. ## Compatibility These files are compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp) as of commit [4524290e8](https://github.com/ggerganov/llama.cpp/commit/4524290e87b8e107cc2b56e1251751546f4b9051), as well as [LM Studio](https://lmstudio.ai/) as of version 0.2.19. # Meta-information ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> ## Provided Files | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ---- | | [mxbai-embed-large-v1.Q2_K.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q2_K.gguf) | Q2_K | 2 | 144 MB | smallest, significant quality loss - not recommended for most purposes | | [mxbai-embed-large-v1.Q3_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 160 MB | very small, high quality loss | | [mxbai-embed-large-v1.Q3_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 181 MB | very small, high quality loss | | [mxbai-embed-large-v1.Q3_K_L.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 198 MB | small, substantial quality loss | | [mxbai-embed-large-v1.Q4_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_0.gguf) | Q4_0 | 4 | 200 MB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mxbai-embed-large-v1.Q4_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 203 MB | small, greater quality loss | | [mxbai-embed-large-v1.Q4_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 216 MB | medium, balanced quality - recommended | | [mxbai-embed-large-v1.Q5_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_0.gguf) | Q5_0 | 5 | 237 MB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mxbai-embed-large-v1.Q5_K_S.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 237 MB | large, low quality loss - recommended | | [mxbai-embed-large-v1.Q5_K_M.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 246 MB | large, very low quality loss - recommended | | [mxbai-embed-large-v1.Q6_K.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q6_K.gguf) | Q6_K | 6 | 278 MB | very large, extremely low quality loss | | [mxbai-embed-large-v1.Q8_0.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1.Q8_0.gguf) | Q8_0 | 8 | 358 MB | very large, extremely low quality loss - recommended | | [mxbai-embed-large-v1_fp16.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1_fp16.gguf) | FP16 | 16 | 670 MB | enormous, pretty much the original model - not recommended | | [mxbai-embed-large-v1_fp32.gguf](https://huggingface.co/ChristianAzinn/mxbai-embed-large-v1-gguf/blob/main/mxbai-embed-large-v1_fp32.gguf) | FP32 | 32 | 1.34 GB | enormous, pretty much the original model - not recommended | # Examples ## Example Usage with `llama.cpp` To compute a single embedding, build llama.cpp and run: ```shell ./embedding -ngl 99 -m [filepath-to-gguf].gguf -p 'search_query: What is TSNE?' ``` You can also submit a batch of texts to embed, as long as the total number of tokens does not exceed the context length. Only the first three embeddings are shown by the `embedding` example. `texts.txt`: ``` search_query: What is TSNE? 
search_query: Who is Laurens Van der Maaten? ``` Compute multiple embeddings: ```shell ./embedding -ngl 99 -m [filepath-to-gguf].gguf -f texts.txt ``` ## Example Usage with LM Studio Download the 0.2.19 beta build from here: [Windows](https://releases.lmstudio.ai/windows/0.2.19/beta/LM-Studio-0.2.19-Setup-Preview-1.exe) [MacOS](https://releases.lmstudio.ai/mac/arm64/0.2.19/beta/LM-Studio-darwin-arm64-0.2.19-Preview-1.zip) [Linux](https://releases.lmstudio.ai/linux/0.2.19/beta/LM_Studio-0.2.19-Preview-1.AppImage) Once installed, open the app. The home should look like this: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/QGkYvH242S0c_clPqX9Ip.png) Search for either "ChristianAzinn" in the main search bar or go to the "Search" tab on the left menu and search the name there. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/11hLos1JNMyZ1q2K9ICss.png) Select your model from those that appear (this example uses `bge-small-en-v1.5-gguf`) and select which quantization you want to download. Since this model is pretty small, I recommend Q8_0, if not f16/32. Generally, the lower you go in the list (or the bigger the number gets), the larger the file and the better the performance. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/hu9DuVYahQ-QpII5P8mVD.png) You will see a green checkmark and the word "Downloaded" once the model has successfully downloaded, which can take some time depending on your network speeds. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/7fmXkLDmGTNVyG3oqB4--.png) Once this model is finished downloading, navigate to the "Local Server" tab on the left menu and open the loader for text embedding models. This loader does not appear before version 0.2.19, so ensure you downloaded the correct version. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/OrzvqQIhB9p-aMq2G6Lxd.png) Select the model you just downloaded from the dropdown that appears to load it. You may need to play with configurations in the right-side menu, such as GPU offload if it doesn't fit entirely into VRAM. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/TM4dO4DFP1xqZD1GWBqeI.png) All that's left to do is to hit the "Start Server" button: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/6TZvnX84rZKZ0TwVVLFnw.png) And if you see text like that shown below in the console, you're good to go! You can use this as a drop-in replacement for the OpenAI embeddings API in any application that requires it, or you can query the endpoint directly to test it out. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6584f042b378d311dccea501/kD47WaH-tzpr4qaAm-pMn.png) Example curl request to the API endpoint: ```shell curl http://localhost:1234/v1/embeddings \ -H "Content-Type: application/json" \ -d '{ "input": "Your text string goes here", "model": "model-identifier-here" }' ``` For more information, see the LM Studio [text embedding documentation](https://lmstudio.ai/docs/text-embeddings). ## Acknowledgements Thanks to the LM Studio team and everyone else working on open-source AI. This README is inspired by that of [nomic-ai-embed-text-v1.5-GGUF](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF), another excellent embedding model, and those of the legendary [TheBloke](https://huggingface.co/TheBloke).
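A Python equivalent of the curl request above, for querying the LM Studio server from a script:

```python
# Query LM Studio's local OpenAI-compatible embeddings endpoint.
import requests

resp = requests.post(
    "http://localhost:1234/v1/embeddings",
    json={"input": "Your text string goes here", "model": "model-identifier-here"},
)
print(resp.json()["data"][0]["embedding"][:8])  # first few dimensions
```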
Nazzyk/ppo-LunarLander-v2
Nazzyk
"2023-03-24T23:58:05Z"
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-03-12T12:53:23Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 265.96 +/- 18.28 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
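Pending the author's own code, a hedged evaluation sketch with `huggingface_sb3` and `stable_baselines3`; the `.zip` filename is a guess:

```python
# Hedged sketch: load the checkpoint and re-check its mean reward.
# The filename is an assumption; LunarLander-v2 needs gymnasium[box2d].
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(repo_id="Nazzyk/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```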
Pochitas/ruBert7
Pochitas
"2025-02-23T14:17:43Z"
0
0
null
[ "text-classification", "ru", "base_model:DeepPavlov/rubert-base-cased", "base_model:finetune:DeepPavlov/rubert-base-cased", "region:us" ]
text-classification
"2025-02-23T14:04:06Z"
--- language: - ru base_model: - DeepPavlov/rubert-base-cased pipeline_tag: text-classification ---
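The card is metadata only; based on it (Russian text classification fine-tuned from `DeepPavlov/rubert-base-cased`), a hedged usage sketch; the label set is not documented:

```python
# Hedged sketch based only on the card metadata; labels are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="Pochitas/ruBert7")
print(clf("Пример текста для классификации."))  # "Example text to classify."
```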
lmqg/mt5-base-frquad-ae
lmqg
"2023-01-09T14:13:22Z"
106
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "answer extraction", "fr", "dataset:lmqg/qg_frquad", "arxiv:2210.03992", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-01-09T14:11:17Z"
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- answer extraction
widget:
- text: "Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées »."
  example_title: "Answer Extraction Example 1"
- text: "Néanmoins, une fois encore, l'arithmétique modulaire est insuffisante pour venir à bout du théorème. Dirichlet utilise de nombreuses techniques analytiques, comme les séries entières et l'analyse complexe. Le fruit de ces travaux donne naissance à une nouvelle branche des mathématiques : la théorie analytique des nombres. L'un des points cruciaux de cette théorie provient de l'unique article de <hl> Bernhard Riemann <hl> en théorie des nombres : Sur le nombre de nombres premiers inférieurs à une taille donnée. Il conjecture une localisation des racines de sa fonction ζ. La recherche de la position des racines, initiée par Dirichlet, devient une préoccupation centrale et reste l'une des conjectures pressenties comme les plus difficiles des mathématiques de notre époque."
  example_title: "Answer Extraction Example 2"
model-index:
- name: lmqg/mt5-base-frquad-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_frquad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 3.8
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 13.02
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 14.28
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 64.97
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 50.67
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 19.32
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 3.92
---

# Model Card of `lmqg/mt5-base-frquad-ae`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview - **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mt5-base-frquad-ae") # model prediction answers = model.generate_a("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-frquad-ae") output = pipe("Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle. Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées ».") ``` ## Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 3.92 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 19.32 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 64.97 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 7.64 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 5.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 4.65 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 3.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 14.28 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 50.67 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 13.02 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at 
[fine-tuning config file](https://huggingface.co/lmqg/mt5-base-frquad-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
minhnguyennnnnn/7304a969-cc0d-4b38-9ad4-6281b96400f9
minhnguyennnnnn
"2025-01-31T22:16:49Z"
6
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-31T20:36:18Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO tags: - axolotl - generated_from_trainer model-index: - name: 7304a969-cc0d-4b38-9ad4-6281b96400f9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 0965318bec140d7e_train_data.json ds_type: json format: custom path: /workspace/input_data/0965318bec140d7e_train_data.json type: field_input: input field_instruction: instruction field_output: responses format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: minhnguyennnnnn/7304a969-cc0d-4b38-9ad4-6281b96400f9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/0965318bec140d7e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 0a74c775-681d-4a4b-a101-ef46a668f347 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 0a74c775-681d-4a4b-a101-ef46a668f347 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 7304a969-cc0d-4b38-9ad4-6281b96400f9 This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.1693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.7509 | 0.0062 | 200 | 0.1693 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
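## How to load (sketch)

The card does not include inference code. Below is a minimal, unverified sketch that assumes the adapter in this repo loads directly on top of the base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter from this repo on top of the base model
model = PeftModel.from_pretrained(base, "minhnguyennnnnn/7304a969-cc0d-4b38-9ad4-6281b96400f9")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```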
hiroki-rad/google-gemma-2-2b-128-ft-3000-prompt-changed
hiroki-rad
"2024-12-15T01:42:15Z"
89
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-15T01:40:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Omarmousa/xlm-roberta-base-finetuned-panx-ar
Omarmousa
"2023-11-26T19:10:22Z"
124
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-11-26T19:02:27Z"
--- tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-ar results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.ar metrics: - name: F1 type: f1 value: 0.8894684900606231 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ar This model is a fine-tuned version of [tner/xlm-roberta-base-panx-dataset-ar](https://huggingface.co/tner/xlm-roberta-base-panx-dataset-ar) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2237 - F1: 0.8895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.234 | 1.0 | 525 | 0.2382 | 0.8587 | | 0.1244 | 2.0 | 1050 | 0.2153 | 0.8844 | | 0.0738 | 3.0 | 1575 | 0.2237 | 0.8895 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.1.0+cu118 - Datasets 1.16.1 - Tokenizers 0.15.0
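## Usage (sketch)

The card above stops at training details; a minimal inference sketch with the 🤗 `pipeline` API follows (the Arabic example sentence is purely illustrative):

```python
from transformers import pipeline

# Named-entity recognition with the fine-tuned PAN-X Arabic model
ner = pipeline(
    "token-classification",
    model="Omarmousa/xlm-roberta-base-finetuned-panx-ar",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("ولد باراك أوباما في هاواي."))  # "Barack Obama was born in Hawaii."
```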
Kastakin/dqn-SpaceInvadersNoFrameskip-v4
Kastakin
"2022-12-20T16:02:56Z"
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2022-12-20T14:02:23Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 986.00 +/- 315.59 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kastakin -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Kastakin -f logs/ rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Kastakin ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.25), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 3000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
sail-rvc/Tohru_e300_s5400
sail-rvc
"2023-07-14T07:33:20Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:33:01Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # Tohru_e300_s5400 ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:33:20 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
z4x/Reinforce-Pixelcopter
z4x
"2023-02-05T21:16:22Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-02-05T21:00:58Z"
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 9.50 +/- 7.63
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
irodrigues/my_awesome_opus_books_model
irodrigues
"2023-05-21T15:00:24Z"
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:opus_books", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-05-21T14:05:57Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books metrics: - bleu model-index: - name: my_awesome_opus_books_model results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus_books type: opus_books config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 5.639 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_opus_books_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 1.6052 - Bleu: 5.639 - Gen Len: 17.6262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8603 | 1.0 | 6355 | 1.6285 | 5.4527 | 17.6356 | | 1.8073 | 2.0 | 12710 | 1.6052 | 5.639 | 17.6262 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
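## Usage (sketch)

A minimal inference sketch, assuming the fine-tuned checkpoint keeps `t5-small`'s task configuration so the pipeline prepends the "translate English to French:" prefix automatically (the example sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="irodrigues/my_awesome_opus_books_model",
)
result = translator("Legumes share resources with nitrogen-fixing bacteria.")
print(result[0]["translation_text"])
```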
JsSparkYyx/flan-t5-base-finetuned-lora-color-3
JsSparkYyx
"2023-11-20T02:56:36Z"
0
0
null
[ "safetensors", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "region:us" ]
null
"2023-11-20T02:56:18Z"
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer model-index: - name: flan-t5-base-finetuned-lora-color-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-finetuned-lora-color-3 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.1 - Pytorch 2.1.1+cu118 - Datasets 2.14.7 - Tokenizers 0.14.1
AlexxxSem/gemma2b-dolly15k-r128
AlexxxSem
"2024-04-26T20:12:28Z"
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:gemma", "region:us" ]
null
"2024-04-26T18:47:09Z"
--- license: gemma library_name: peft tags: - trl - sft - generated_from_trainer base_model: google/gemma-2b model-index: - name: gemma2b-dolly15k-r128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma2b-dolly15k-r128 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
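## How to load (sketch)

Since this repo contains a PEFT LoRA adapter for `google/gemma-2b`, the usual loading pattern should apply; this is a sketch under that assumption, not a verified recipe:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "AlexxxSem/gemma2b-dolly15k-r128")

# Optional: fold the LoRA deltas into the base weights for adapter-free inference
model = model.merge_and_unload()
```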
DevQuasar/EmTpro01.CodeLlama-7b-java-16bit-GGUF
DevQuasar
"2025-02-01T23:06:34Z"
63
0
null
[ "gguf", "text-generation", "base_model:EmTpro01/CodeLlama-7b-java-16bit", "base_model:quantized:EmTpro01/CodeLlama-7b-java-16bit", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-08T14:38:02Z"
--- base_model: - EmTpro01/CodeLlama-7b-java-16bit pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) 'Make knowledge free for everyone' Quantized version of: [EmTpro01/CodeLlama-7b-java-16bit](https://huggingface.co/EmTpro01/CodeLlama-7b-java-16bit) <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
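A minimal local-inference sketch with `llama-cpp-python`; the quant filename pattern below is an assumption, so substitute whichever `.gguf` file actually exists in this repo:

```python
from llama_cpp import Llama

# Download one quant from this repo and load it (the filename glob is an assumption)
llm = Llama.from_pretrained(
    repo_id="DevQuasar/EmTpro01.CodeLlama-7b-java-16bit-GGUF",
    filename="*Q4_K_M.gguf",
)
output = llm("// Java method that reverses a string\n", max_tokens=64)
print(output["choices"][0]["text"])
```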
ivillar/whisperfinetune-cosine
ivillar
"2024-04-30T19:57:43Z"
4
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-04-30T19:57:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
OneFly7/T5-base-finetuned-on-webnlg-train-eredat-Q1-epoch10
OneFly7
"2024-05-27T12:36:29Z"
163
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-05-27T12:35:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sqiangcao/sd-class-butterflies-32
sqiangcao
"2024-01-26T12:06:56Z"
44
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2024-01-26T12:05:59Z"
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('sqiangcao/sd-class-butterflies-32') image = pipeline().images[0] image ```
rahmaabusalma/bert-base-indonesian-1.5G-sentiment-analysis-smsa-tuning
rahmaabusalma
"2024-05-20T08:35:33Z"
110
1
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa", "base_model:finetune:ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-20T08:34:58Z"
--- license: mit base_model: ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa tags: - generated_from_trainer model-index: - name: bert-base-indonesian-1.5G-sentiment-analysis-smsa-tuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-indonesian-1.5G-sentiment-analysis-smsa-tuning This model is a fine-tuned version of [ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa](https://huggingface.co/ayameRushia/bert-base-indonesian-1.5G-sentiment-analysis-smsa) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
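## Usage (sketch)

No inference snippet ships with this card; a minimal sketch with the 🤗 `pipeline` API (the Indonesian example sentence is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="rahmaabusalma/bert-base-indonesian-1.5G-sentiment-analysis-smsa-tuning",
)
print(clf("Filmnya bagus sekali!"))  # "The movie is really good!"
```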
impossibleexchange/h75
impossibleexchange
"2025-02-08T19:18:39Z"
26
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-08T19:15:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lhong4759/a0ab8e04-e48a-4a1b-a816-9f873621660b
lhong4759
"2025-01-17T22:10:49Z"
6
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "license:mit", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-17T20:19:19Z"
--- library_name: peft license: mit base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B tags: - axolotl - generated_from_trainer model-index: - name: a0ab8e04-e48a-4a1b-a816-9f873621660b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - ea5f08ab221d8fbb_train_data.json ds_type: json format: custom path: /workspace/input_data/ea5f08ab221d8fbb_train_data.json type: field_instruction: premise field_output: entailment format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: lhong4759/a0ab8e04-e48a-4a1b-a816-9f873621660b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/ea5f08ab221d8fbb_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 16143330-1a5b-48c0-b483-592dd437034d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 16143330-1a5b-48c0-b483-592dd437034d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # a0ab8e04-e48a-4a1b-a816-9f873621660b This model is a fine-tuned version of [migtissera/Tess-v2.5-Phi-3-medium-128k-14B](https://huggingface.co/migtissera/Tess-v2.5-Phi-3-medium-128k-14B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6320 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.7287 | 0.0068 | 200 | 0.6320 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
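## How to load (sketch)

As with other Axolotl LoRA runs, the adapter should attach to its base model via PEFT; the training config above loaded the base in 8-bit, so the sketch below mirrors that (an assumption, not a verified recipe):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

base = AutoModelForCausalLM.from_pretrained(
    "migtissera/Tess-v2.5-Phi-3-medium-128k-14B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # matches load_in_8bit: true above
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "lhong4759/a0ab8e04-e48a-4a1b-a816-9f873621660b")
```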
saraleivam/GURU2-paraphrase-multilingual-MiniLM-L12-v2
saraleivam
"2024-06-24T20:57:02Z"
13
1
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:500", "loss:SoftmaxLoss", "arxiv:1908.10084", "base_model:saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2", "base_model:finetune:saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-06-24T20:56:31Z"
--- base_model: saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2 datasets: [] language: [] library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:500 - loss:SoftmaxLoss widget: - source_sentence: Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data. sentences: - Data mining of Clinical Databases - CDSS 1.Data Science.Machine Learning.Understand the Schema of publicly available EHR databases (MIMIC-III). Recognise the International Classification of Diseases (ICD) use. Extract and visualise descriptive statistics from clinical databases. Understand and extract key clinical outcomes such as mortality and stay of length - Natural Language Processing on Google Cloud.Data Science.Machine Learning.Machine Learning, Natural Language Processing, Tensorflow - 'Auditing I: Conceptual Foundations of Auditing.Business.Business Essentials.Accounting, Audit, Critical Thinking, Financial Analysis, Regulations and Compliance, Risk Management, Financial Accounting, General Accounting, Leadership and Management, Finance' - source_sentence: Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data. sentences: - Generando modelos con Auto Machine Learning.Data Science.Machine Learning.Desarrollar modelos utilizando herramientas de Auto Machine Learning. Explorar los datos y hacer el tratamiento para su uso al generar modelos - Professionalism in Allied Health.Personal Development.Personal Development.Gain an understanding of the expectations of an allied healthcare professional in the workplace. Develop and exercise emotional intelligence, self-management, and interpersonal skills. Build and improve internal and external communication skills with all exchanges. Enhance the patient care experience with successful interactions and patient satisfaction - Big Data, Genes, and Medicine.Health.Health Informatics.Big Data, Bioinformatics, Data Analysis, Data Analysis Software, Statistical Programming, Algorithms, Exploratory Data Analysis, Computer Programming - source_sentence: Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data. sentences: - Retail Marketing Strategy.Business.Marketing.Brand Management, Leadership and Management, Marketing, Sales, Strategy, Strategy and Operations, Retail Sales, Retail Store Operations, Data Analysis, E-Commerce - Supporting Veteran Success in Higher Education.Personal Development.Personal Development.Supporting Veteran Success in Higher Education - Advanced AI Techniques for the Supply Chain.Data Science.Machine Learning.Machine Learning, Natural Language Processing - source_sentence: Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. 
Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data. sentences: - Fundamentals of Flight mechanics.Physical Science and Engineering.Physics and Astronomy.How Mach number can impact stall speed.. Why turboprops consume less than turbojets.. What exactly mean indications given by flight instruments (i.e. anemometer, altimeter). - 'Learn English: Beginning Grammar.Language Learning.Learning English.Writing, Communication' - Product Management Certification.Business.Leadership and Management.Apply key product management skills, tools, and techniques to engage and manage key stakeholders and clients. Identify product strategy development and implementation methods and best practices to ensure the right product is produced. Describe product development and analysis best practices to effectively manage change and ensure a successful product launch. Test what you have learned in a series of practical exercises allowing you to demonstrate real-word product management - source_sentence: Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data. sentences: - 'Python, Bash and SQL Essentials for Data Engineering.Computer Science.Software Development.Develop data engineering solutions with a minimal and essential subset of the Python language and the Linux environment. Design scripts to connect and query a SQL database using Python. Use a scraping library in Python to read, identify and extract data from websites ' - 'AI-Enhanced Content Creation:Elevate Copywriting with Humata.Data Science.Machine Learning.Use prompts in Humata AI to get the information needed to generate an ad copy from the source files. . Create engaging ads and blog posts tailored to your audience with the help of Humata AI prompts. . Create a compelling advertisement for various online platforms using prompt engineering in Humata AI. ' - SQL for Data Science Capstone Project.Data Science.Data Analysis.Develop a project proposal and select your data. Perform descriptive statistics as part of your exploratory analysis. Develop metrics and perform advanced techniques in SQL. Present your findings and make recommendations --- # SentenceTransformer based on saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 0f16d34e08fc583b71c922dc18d3b14eba17983c --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("saraleivam/GURU2-paraphrase-multilingual-MiniLM-L12-v2") # Run inference sentences = [ 'Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data.', 'Python, Bash and SQL Essentials for Data Engineering.Computer Science.Software Development.Develop data engineering solutions with a minimal and essential subset of the Python language and the Linux environment. Design scripts to connect and query a SQL database using Python. Use a scraping library in Python to read, identify and extract data from websites ', 'AI-Enhanced Content Creation:Elevate Copywriting with Humata.Data Science.Machine Learning.Use prompts in Humata AI to get the information needed to generate an ad copy from the source files. . Create engaging ads and blog posts tailored to your audience with the help of Humata AI prompts. . Create a compelling advertisement for various online platforms using prompt engineering in Humata AI. ', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 500 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | label | |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 77 tokens</li><li>mean: 77.0 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 64.05 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~17.00%</li><li>1: ~25.00%</li><li>2: ~58.00%</li></ul> | * Samples: | sentence1 | sentence2 | label | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data.</code> | <code>Introduction to Generative AI - 한국어.Information Technology.Cloud Computing.생성형 AI 정의. 생성형 AI의 작동 방식 설명. 생성형 AI 모델 유형 설명. 생성형 AI 애플리케이션 설명</code> | <code>0</code> | | <code>Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. Llega a enfocarse en 30% en master data.</code> | <code>Mastering Excel Essentials to Enhance Business Value.Business.Business Essentials.Effectively input data and efficiently navigate large spreadsheets.. Employ various "hacks" and expertly apply (the most appropriate) built-in functions in Excel to increase productivity and streamline workflow.. Apply the "what-if" analysis tools in Excel to conduct break-even analysis, conduct sensitivity analysis and support decision-making.</code> | <code>1</code> | | <code>Servicio consultor SAP MM con experiencia Data Maestra SemiSenior, actualizaciones, referencias, ingles B2, remoto. Que maneje plantillas de SAP Bridge e Ibérico. Con experiencia en ServiceNow. No llegan ni a implementar, ni a ejecutar ni a hacer roll out. 
Llega a enfocarse en 30% en master data.</code> | <code>Exploring Piano Literature: The Piano Sonata.Arts and Humanities.Music and Art.Identify specific historical time periods in which the popularity of sonatas increases or decreases and the reasons behind these trends. . Identify sonata form. Recognize the most influential pieces in the sonata repertoire. </code> | <code>2</code> | * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss) ### Training Hyperparameters #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3.0 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None 
- `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.41.2 - PyTorch: 2.3.1+cu121 - Accelerate: 0.31.0 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers and SoftmaxLoss ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
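As an illustrative addition (not part of the original card): the training section above reports `SoftmaxLoss` over `(sentence1, sentence2, label)` triples with three classes, and a minimal sketch of that setup might look like the following. The single sample row is a stand-in for the unnamed 500-example dataset, not the actual data.

```python
# Minimal sketch of the reported setup: SoftmaxLoss over labelled sentence
# pairs with three classes. The one-row dataset below is a stand-in only.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("saraleivam/GURU-paraphrase-multilingual-MiniLM-L12-v2")

train_dataset = Dataset.from_dict({
    "sentence1": ["Servicio consultor SAP MM con experiencia en data maestra."],
    "sentence2": ["SQL for Data Science Capstone Project. Data Analysis."],
    "label": [2],
})

loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```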
sivasis-tripathy/Llama-2-7b-chat-midjourney-prompts-2
sivasis-tripathy
"2023-08-09T11:15:35Z"
3
1
peft
[ "peft", "region:us" ]
null
"2023-08-09T11:10:44Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
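A minimal inference sketch under stated assumptions: the card records only the training-time quantization config, so the snippet below reconstructs it with `BitsAndBytesConfig` and attaches this adapter. The base checkpoint name (`meta-llama/Llama-2-7b-chat-hf`) is inferred from the repo name and is not stated in the card.

```python
# Sketch (not from the card): rebuild the 4-bit config listed above and
# attach this PEFT adapter. The base checkpoint name is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base; the card does not name it
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "sivasis-tripathy/Llama-2-7b-chat-midjourney-prompts-2")

inputs = tokenizer("A midjourney prompt for a neon cyberpunk alley:", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```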
VERSIL91/8f862e44-82ea-4e1e-bf5c-1a8ef239d495
VERSIL91
"2024-12-29T00:35:52Z"
7
0
peft
[ "peft", "safetensors", "gemma2", "axolotl", "generated_from_trainer", "base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "base_model:adapter:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2", "license:gemma", "region:us" ]
null
"2024-12-29T00:28:52Z"
--- library_name: peft license: gemma base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2 tags: - axolotl - generated_from_trainer model-index: - name: 8f862e44-82ea-4e1e-bf5c-1a8ef239d495 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml accelerate_config: dynamo_backend: inductor mixed_precision: bf16 num_machines: 1 num_processes: auto use_cpu: false adapter: lora base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 62746d7cba498e88_train_data.json ds_type: json field: question path: /workspace/input_data/62746d7cba498e88_train_data.json type: completion debug: null deepspeed: null device_map: auto early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 16 gradient_checkpointing: true group_by_length: false hub_model_id: VERSIL91/8f862e44-82ea-4e1e-bf5c-1a8ef239d495 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lora_target_modules: - q_proj - v_proj lr_scheduler: cosine max_memory: 0: 70GiB max_steps: 5 micro_batch_size: 2 mlflow_experiment_name: /tmp/62746d7cba498e88_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true quantization_config: llm_int8_enable_fp32_cpu_offload: true load_in_8bit: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer torch_compile: true train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 8f862e44-82ea-4e1e-bf5c-1a8ef239d495 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 8f862e44-82ea-4e1e-bf5c-1a8ef239d495 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8f862e44-82ea-4e1e-bf5c-1a8ef239d495 This model is a fine-tuned version of [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 6.2705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 7.4229 | 0.0429 | 1 | 6.9510 | | 7.0632 | 0.0858 | 2 | 6.8305 | | 6.4964 | 0.1716 | 4 | 6.2705 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
DrishtiSharma/speecht5_finetuned_voxpopuli_es_20k_steps_bs_8
DrishtiSharma
"2023-08-01T21:57:41Z"
80
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:voxpopuli", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
"2023-08-01T20:09:40Z"
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_es_20k_steps results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_es_20k_steps This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4309 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 20000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.4928 | 5.4 | 5000 | 0.4378 | | 0.4567 | 10.8 | 10000 | 0.4332 | | 0.4456 | 16.2 | 15000 | 0.4323 | | 0.4394 | 21.6 | 20000 | 0.4309 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.2 - Tokenizers 0.13.3
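As an illustrative sketch only (the card gives no usage code), the standard SpeechT5 inference recipe should apply to this checkpoint; the x-vector speaker-embedding source below is an assumption, not something the card specifies.

```python
# Illustrative SpeechT5 inference (generic recipe, not taken from this card).
# The speaker-embedding dataset is an assumption; any 512-dim x-vector works.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "DrishtiSharma/speecht5_finetuned_voxpopuli_es_20k_steps_bs_8"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hola, bienvenidos a esta demostración.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```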
mradermacher/Z1-Coder-7B-GGUF
mradermacher
"2025-03-01T08:34:07Z"
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Z1-Coder/Z1-Coder-7B", "base_model:quantized:Z1-Coder/Z1-Coder-7B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-01T08:25:07Z"
--- base_model: Z1-Coder/Z1-Coder-7B language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Z1-Coder/Z1-Coder-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.IQ4_XS.gguf) | IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Z1-Coder-7B-GGUF/resolve/main/Z1-Coder-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
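For a Python alternative to the CLI workflow in TheBloke's READMEs, a hedged sketch using the `llama-cpp-python` bindings (an assumption; this card does not prescribe a runtime) could look like:

```python
# Hedged sketch: run the Q4_K_M quant from the table above through the
# llama-cpp-python bindings. File name taken from the provided-quants table.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Z1-Coder-7B-GGUF",
    filename="Z1-Coder-7B.Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=2048,
)
out = llm("Write a binary search function in Python.", max_tokens=256)
print(out["choices"][0]["text"])
```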
SteffRhes/de_APIS_OEBL_NER_lg
SteffRhes
"2024-12-13T21:35:31Z"
11
0
spacy
[ "spacy", "token-classification", "de", "dataset:SteffRhes/APIS_OEBL__Named_Entity_Recognition", "license:mit", "model-index", "region:us" ]
token-classification
"2023-11-30T17:51:41Z"
--- tags: - spacy - token-classification language: - de model-index: - name: de_APIS_OEBL_NER_lg results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7671428571 - name: NER Recall type: recall value: 0.7902869757 - name: NER F Score type: f_score value: 0.7785429503 license: mit datasets: - SteffRhes/APIS_OEBL__Named_Entity_Recognition library_name: spacy pipeline_tag: token-classification --- | Feature | Description | | --- | --- | | **Name** | `de_APIS_OEBL_NER_lg` | | **Version** | `1.0` | | **spaCy** | `>=3.6.0,<3.7.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `LOC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 77.85 | | `ENTS_P` | 76.71 | | `ENTS_R` | 79.03 | | `TOK2VEC_LOSS` | 13266.36 | | `NER_LOSS` | 378634.81 | ### Sources Trained on data originating from the [APIS project](https://www.oeaw.ac.at/acdh/projects/completed-projects/apis) and the [Austrian Biographical Lexicon (ÖBL)](https://www.oeaw.ac.at/acdh/oebl). Reproducible training context (model m2): https://github.com/acdh-oeaw/veld_chain_7_train/ Dataset available here: https://huggingface.co/datasets/SteffRhes/APIS_OEBL__Named_Entity_Recognition
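A minimal usage sketch, assuming the packaged pipeline has been installed from this repo (for example via the wheel in the Files tab); the example sentence is invented:

```python
# Sketch, assuming the pipeline wheel from this repo has been installed
# (pip install <downloaded .whl>). Example sentence is invented.
import spacy

nlp = spacy.load("de_APIS_OEBL_NER_lg")
doc = nlp("Adalbert Stifter wurde in Oberplan geboren und starb 1868 in Linz.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: PER, LOC, ORG
```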
guydebruyn/dqn-SpaceInvadersNoFrameskip-v4
guydebruyn
"2023-09-14T03:31:42Z"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-09-14T03:31:03Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 616.00 +/- 136.58 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guydebruyn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga guydebruyn -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga guydebruyn ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
epchannel/EpXTTS
epchannel
"2025-04-04T10:13:21Z"
0
0
null
[ "text-to-speech", "vi", "dataset:capleaf/viVoice", "license:other", "region:us" ]
text-to-speech
"2025-04-04T09:57:02Z"
--- license: other license_name: coqui-public-model-license license_link: https://coqui.ai/cpml pipeline_tag: text-to-speech datasets: - capleaf/viVoice language: - vi --- # viⓍTTS viⓍTTS là mô hình tạo sinh giọng nói cho phép bạn sao chép giọng nói sang các ngôn ngữ khác nhau chỉ bằng cách sử dụng một đoạn âm thanh nhanh dài 6 giây. Mô hình này được tiếp tục đào tạo từ mô hình [XTTS-v2.0.3](https://huggingface.co/coqui/XTTS-v2) bằng cách mở rộng tokenizer sang tiếng Việt và huấn luyện trên tập dữ liệu [viVoice](https://huggingface.co/datasets/thinhlpg/viVoice). viⓍTTS is a voice generation model that lets you clone voices into different languages by using just a quick 6-second audio clip. This model is fine-tuned from the [XTTS-v2.0.3](https://huggingface.co/coqui/XTTS-v2) model by expanding the tokenizer to Vietnamese and fine-tuning on the [viVoice](https://huggingface.co/datasets/thinhlpg/viVoice) dataset. ### Languages viXTTS supports 18 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), Hindi (hi), **Vietnamese (vi)**. ### Known Limitations - Incompatibility with the [original TTS library](https://github.com/coqui-ai/TTS) (a pull request will be made later). - Subpar performance for Vietnamese input sentences under 10 words (yielding inconsistent output and odd trailing sounds). - This model is only fine-tuned on Vietnamese. Its effectiveness with languages other than Vietnamese hasn't been tested, potentially reducing quality. ### Demo Please check out [this repo](https://github.com/thinhlpg/vixtts-demo) ### Usage For quick usage, please check out [this notebook](https://colab.research.google.com/drive/1q9vA7mDyvK_u0ijDDNuycDoUUbryM3p3?usp=sharing) ### License This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). ### Contact Fine-tuned by Thinh Le at FPT University HCMC, as a component of [Non La](https://huggingface.co/capleaf)'s graduation thesis. Contact: - You can message me directly on Facebook: <https://fb.com/thinhlpg/> (preferred 🤗) - GitHub: <https://github.com/thinhlpg> - Email: <[email protected]> or <[email protected]>
LHRuig/blasmartin5
LHRuig
"2025-02-02T06:04:43Z"
8
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-02T06:04:17Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: blasmartin5 --- # blasmartin5 <Gallery /> ## Model description blasmartin5 lora ## Trigger words You should use `blasmartin5` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/blasmartin5/tree/main) them in the Files & versions tab.
yuighj123/image_classification_covid19
yuighj123
"2024-07-06T07:25:56Z"
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-07-06T07:22:00Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: image_classification_covid19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification_covid19 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the covid-19 datasets dataset. It achieves the following results on the evaluation set: - Loss: 0.2704 - Accuracy: 0.8939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
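As a hedged illustration (the card leaves usage unspecified), inference should work through the standard image-classification pipeline; the image path below is a placeholder:

```python
# Illustrative sketch (usage is not documented in the card): classify an
# image with the fine-tuned ViT. The file name is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="yuighj123/image_classification_covid19")
print(classifier("chest_xray_example.png"))
```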
glif-loradex-trainer/maxxd4240_BlueDraw
glif-loradex-trainer
"2024-12-01T17:21:13Z"
19
2
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
"2024-12-01T17:20:07Z"
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1733073486720__000003000_0.jpg text: ' man with Border Collie in backyard BluD!! ' - output: url: samples/1733073511560__000003000_1.jpg text: 'gorgeous korean woman with white silky long hair and has deer antlers, wears white camisole dress BluD!! ' - output: url: samples/1733073536403__000003000_2.jpg text: Low angle shot of people hugging each other in a circle, leaving a lot of space in the middle BluD!! - output: url: samples/1733073561241__000003000_3.jpg text: beatles abby road album cover BluD!! - output: url: samples/1733073586081__000003000_4.jpg text: joker playing cards BluD!! base_model: black-forest-labs/FLUX.1-dev trigger: BluD!! instance_prompt: BluD!! license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # BlueDraw Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`. <Gallery /> ## Trigger words You should use `BluD!!` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/maxxd4240_BlueDraw/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
anhvth5/sd15-lora
anhvth5
"2024-06-17T12:19:46Z"
7
0
diffusers
[ "diffusers", "text-to-image", "lora", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2024-06-17T12:07:51Z"
--- base_model: runwayml/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - diffusers - lora - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: a photo of sks dog --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA DreamBooth - anhvth5/sd15-lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python
# Illustrative sketch (the original card left this snippet as a TODO): load
# the base Stable Diffusion 1.5 pipeline, attach these LoRA weights, and
# generate with the instance prompt the card reports.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("anhvth5/sd15-lora")
image = pipe("a photo of sks dog").images[0]
image.save("sks_dog.png")
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF
pookie3000
"2025-02-24T21:11:25Z"
0
0
transformers
[ "transformers", "gguf", "llama-3", "llama", "meta", "facebook", "unsloth", "llama-cpp", "gguf-my-repo", "en", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:quantized:unsloth/Meta-Llama-3.1-8B", "license:llama3.1", "endpoints_compatible", "region:us" ]
null
"2025-02-24T21:10:59Z"
--- language: - en library_name: transformers license: llama3.1 tags: - llama-3 - llama - meta - facebook - unsloth - transformers - llama-cpp - gguf-my-repo base_model: unsloth/Meta-Llama-3.1-8B --- # pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF This model was converted to GGUF format from [`unsloth/Meta-Llama-3.1-8B`](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo pookie3000/Meta-Llama-3.1-8B-Q5_K_M-GGUF --hf-file meta-llama-3.1-8b-q5_k_m.gguf -c 2048 ```
nat-hunt/1b634cee-f976-49d6-b97a-7cce7b9508ae
nat-hunt
"2025-01-30T18:43:32Z"
6
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
"2025-01-30T18:30:42Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 1b634cee-f976-49d6-b97a-7cce7b9508ae results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-0.5B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 386ec04939cf60c6_train_data.json ds_type: json format: custom path: /workspace/input_data/386ec04939cf60c6_train_data.json type: field_input: article field_instruction: ingress field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: nat-hunt/1b634cee-f976-49d6-b97a-7cce7b9508ae hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/386ec04939cf60c6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7a82a8fd-4ff9-40db-bc03-36dc2c240a55 wandb_project: Birthday-SN56-4-Gradients-On-Demand wandb_run: your_name wandb_runid: 7a82a8fd-4ff9-40db-bc03-36dc2c240a55 warmup_steps: 5 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1b634cee-f976-49d6-b97a-7cce7b9508ae This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.5486 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 5.2045 | | 4.9814 | 0.0006 | 13 | 4.2765 | | 4.2586 | 0.0012 | 26 | 3.6603 | | 3.7714 | 0.0018 | 39 | 3.5486 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Divyansh008/Tiny-Urvashi-v1-Tinyllama
Divyansh008
"2025-03-10T11:18:38Z"
0
0
null
[ "safetensors", "llama", "merge", "mergekit", "lazymergekit", "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "Divyansh008/Tiny-Urvashi-v1-bf16", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "region:us" ]
null
"2025-03-10T10:58:09Z"
--- base_model: - TinyLlama/TinyLlama-1.1B-Chat-v1.0 - Divyansh008/Tiny-Urvashi-v1-bf16 tags: - merge - mergekit - lazymergekit - TinyLlama/TinyLlama-1.1B-Chat-v1.0 - Divyansh008/Tiny-Urvashi-v1-bf16 --- # Tiny-Urvashi-v1-Tinyllama Tiny-Urvashi-v1-Tinyllama is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) * [Divyansh008/Tiny-Urvashi-v1-bf16](https://huggingface.co/Divyansh008/Tiny-Urvashi-v1-bf16) ## 🧩 Configuration ```yaml slices: - sources: - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 layer_range: [0, 22] - model: Divyansh008/Tiny-Urvashi-v1-bf16 layer_range: [0, 22] merge_method: slerp base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.3 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Divyansh008/Tiny-Urvashi-v1-Tinyllama" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
nikoryagin/sae_Qwen_Qwen2.5-7B_resid_post_layer_25_size_16384_batchtopk_x197bh52_lora_a643mtld
nikoryagin
"2025-04-06T20:38:52Z"
0
0
transformers
[ "transformers", "safetensors", "sae", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
"2025-04-06T20:38:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
osiria/minilm-l6-h384-italian-cased
osiria
"2023-12-09T00:11:30Z"
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "it", "arxiv:2012.15828", "arxiv:2010.05609", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-09-30T21:39:44Z"
--- license: mit language: - it --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: MiniLM</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>MiniLMv2</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using <b>mMiniLMv2</b> ([L6xH384 mMiniLMv2](https://github.com/microsoft/unilm/tree/master/minilm)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset) The resulting model has 23M parameters, a vocabulary of 30.498 tokens, and a size of ~90 MB. <h3>References</h3> [1] https://arxiv.org/abs/2012.15828 [2] https://arxiv.org/abs/2010.05609 <h3>License</h3> The model is released under <b>MIT</b> license
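A minimal fill-mask sketch, assuming the standard `transformers` pipeline applies to this checkpoint (the card includes no usage code); the Italian example sentence is invented:

```python
# Minimal fill-mask sketch; the example sentence is invented.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="osiria/minilm-l6-h384-italian-cased")
# "Roma è la <mask> d'Italia." = "Rome is the <mask> of Italy."
for pred in fill_mask("Roma è la <mask> d'Italia.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```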
zurandmoro/31fc3548ec7a
zurandmoro
"2025-04-04T19:15:31Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-04T18:52:17Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: 31fc3548ec7a --- # 31Fc3548Ec7A <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `31fc3548ec7a` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "31fc3548ec7a", "lora_weights": "https://huggingface.co/zurandmoro/31fc3548ec7a/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('zurandmoro/31fc3548ec7a', weight_name='lora.safetensors') image = pipeline('31fc3548ec7a').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/zurandmoro/31fc3548ec7a/discussions) to add images that show off what you’ve made with this LoRA.
Nayana-cognitivelab/Nayana-IR-finetune_colpali_v1_2-1k-4bit
Nayana-cognitivelab
"2025-03-06T05:09:18Z"
0
0
transformers
[ "transformers", "safetensors", "colpali", "generated_from_trainer", "base_model:vidore/colpaligemma-3b-pt-448-base", "base_model:finetune:vidore/colpaligemma-3b-pt-448-base", "license:gemma", "endpoints_compatible", "region:us" ]
null
"2025-03-06T05:08:36Z"
--- library_name: transformers license: gemma base_model: vidore/colpaligemma-3b-pt-448-base tags: - colpali - generated_from_trainer model-index: - name: Nayana-IR-finetune_colpali_v1_2-1k-4bit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Nayana-IR-finetune_colpali_v1_2-1k-4bit This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the vidore/vdsid_french dataset. It achieves the following results on the evaluation set: - Loss: 0.0236 - Model Preparation Time: 0.0053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | |:-------------:|:-----:|:----:|:---------------:|:----------------------:| | No log | 0.016 | 1 | 0.0799 | 0.0053 | ### Framework versions - Transformers 4.47.1 - Pytorch 2.6.0+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
aakarsh-nair/rerun-09-19-2024-experiment-distill-tree-babylm2024-360-2
aakarsh-nair
"2024-09-20T18:20:31Z"
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-20T18:19:36Z"
--- library_name: transformers tags: - generated_from_trainer model-index: - name: rerun-09-19-2024-experiment-distill-tree-babylm2024-360-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rerun-09-19-2024-experiment-distill-tree-babylm2024-360-2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7367 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 5.146 | 1.0 | 2065 | 5.6306 | | 3.1038 | 2.0 | 4130 | 3.4614 | | 2.4109 | 3.0 | 6195 | 2.7661 | | 2.091 | 4.0 | 8260 | 2.3748 | | 1.8375 | 5.0 | 10325 | 2.1678 | | 1.7081 | 6.0 | 12390 | 1.9763 | | 1.5419 | 7.0 | 14455 | 1.8331 | | 1.4752 | 8.0 | 16520 | 1.7660 | | 1.4168 | 9.0 | 18585 | 1.7420 | | 1.4489 | 10.0 | 20650 | 1.7367 | ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 2.4.1+cu121 - Tokenizers 0.19.1
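A hedged usage sketch: the card records no inference example, but the tags mark this as a Llama-style text-generation checkpoint, so the generic pipeline is assumed to apply; nothing below comes from the card itself.

```python
# Hedged sketch: generic text-generation pipeline over this checkpoint,
# assumed to work from the card's llama / text-generation tags.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="aakarsh-nair/rerun-09-19-2024-experiment-distill-tree-babylm2024-360-2",
)
print(generator("Once upon a time,", max_new_tokens=40)[0]["generated_text"])
```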
devdatanalytics/irishpotato
devdatanalytics
"2023-09-19T13:50:46Z"
0
0
fastai
[ "fastai", "region:us" ]
null
"2023-09-19T13:50:40Z"
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
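A minimal loading sketch, assuming the `huggingface_hub` fastai integration; the image path is a placeholder:

```python
from huggingface_hub import from_pretrained_fastai

# Download the learner from the Hub and run a single prediction
learner = from_pretrained_fastai("devdatanalytics/irishpotato")
pred, pred_idx, probs = learner.predict("potato_leaf.jpg")  # placeholder image path
print(pred, probs[pred_idx])
```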
d-karpone/speecht5_finetuned_voxpopuli_nl
d-karpone
"2023-08-24T11:02:02Z"
82
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-08-24T09:16:17Z"
--- license: mit tags: - generated_from_trainer - text-to-speech datasets: - facebook/voxpopuli model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.525 | 4.3 | 1000 | 0.4759 | | 0.5035 | 8.61 | 2000 | 0.4628 | | 0.4939 | 12.91 | 3000 | 0.4586 | | 0.4918 | 17.21 | 4000 | 0.4563 | ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
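A minimal inference sketch, assuming the standard SpeechT5 API; the x-vector speaker embedding source and the Dutch prompt are illustrative choices, not from the original card:

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("d-karpone/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("d-karpone/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# any 512-dim x-vector works as the speaker embedding; index 7306 is a common example
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 16 kHz waveform tensor; write it out with e.g. soundfile
```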
software-vagabond/Reinforce-CartPole-v1
software-vagabond
"2023-04-28T15:23:14Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-04-28T15:23:02Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AlexChe/Reinforce-1
AlexChe
"2022-07-26T14:12:15Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2022-07-26T14:12:08Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - metrics: - type: mean_reward value: 11.40 +/- 7.09 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Salesforce/codegen-6B-multi
Salesforce
"2025-01-31T21:27:38Z"
1,964
20
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "arxiv:2203.13474", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-04-13T00:51:28Z"
--- license: bsd-3-clause --- # CodeGen (CodeGen-Multi 6B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Multi 6B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 6B* and further pre-trained on a dataset of multiple programming languages, and "6B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-Multi 6B) was first initialized with *CodeGen-NL 6B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models was trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and computing their likelihood. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-multi") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-multi") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
1a3orn/Llama-3.2-3B-Q4_K_M-GGUF
1a3orn
"2024-10-04T20:47:46Z"
8
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "de", "fr", "it", "pt", "hi", "es", "th", "base_model:meta-llama/Llama-3.2-3B", "base_model:quantized:meta-llama/Llama-3.2-3B", "license:llama3.2", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-04T20:47:35Z"
--- base_model: meta-llama/Llama-3.2-3B language: - en - de - fr - it - pt - hi - es - th library_name: transformers license: llama3.2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\ \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\ \ for use, reproduction, distribution and modification of the Llama Materials set\ \ forth herein.\n\n“Documentation” means the specifications, manuals and documentation\ \ accompanying Llama 3.2 distributed by Meta at https://llama.meta.com/doc/overview.\n\ \n“Licensee” or “you” means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\n“Llama 3.2”\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at https://www.llama.com/llama-downloads.\n\n“Llama Materials” means,\ \ collectively, Meta’s proprietary Llama 3.2 and Documentation (and any portion\ \ thereof) made available under this Agreement.\n\n“Meta” or “we” means Meta Platforms\ \ Ireland Limited (if you are located in or, if you are an entity, your principal\ \ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if\ \ you are located outside of the EEA or Switzerland). \n\nBy clicking “I Accept”\ \ below or by using or distributing any portion or element of the Llama Materials,\ \ you agree to be bound by this Agreement.\n\n1. License Rights and Redistribution.\n\ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\ \ and royalty-free limited license under Meta’s intellectual property or other rights\ \ owned by Meta embodied in the Llama Materials to use, reproduce, distribute,\ \ copy, create derivative works of, and make modifications to the Llama Materials.\ \ \nb. Redistribution and Use. \ni. If you distribute or make available the Llama\ \ Materials (or any derivative works thereof), or a product or service (including\ \ another AI model) that contains any of them, you shall (A) provide a copy of this\ \ Agreement with any such Llama Materials; and (B) prominently display “Built with\ \ Llama” on a related website, user interface, blogpost, about page, or product\ \ documentation. If you use the Llama Materials or any outputs or results of the\ \ Llama Materials to create, train, fine tune, or otherwise improve an AI model,\ \ which is distributed or made available, you shall also include “Llama” at the\ \ beginning of any such AI model name.\nii. If you receive Llama Materials, or any\ \ derivative works thereof, from a Licensee as part of an integrated end user product,\ \ then Section 2 of this Agreement will not apply to you. \niii. You must retain\ \ in all copies of the Llama Materials that you distribute the following attribution\ \ notice within a “Notice” text file distributed as a part of such copies: “Llama\ \ 3.2 is licensed under the Llama 3.2 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://www.llama.com/llama3_2/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\n \n2. Additional Commercial Terms. If, on the Llama 3.2\ \ version release date, the monthly active users of the products or services made\ \ available by or for Licensee, or Licensee’s affiliates, is greater than 700 million\ \ monthly active users in the preceding calendar month, you must request a license\ \ from Meta, which Meta may grant to you in its sole discretion, and you are not\ \ authorized to exercise any of the rights under this Agreement unless or until\ \ Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS\ \ REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM\ \ ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS\ \ ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR\ \ PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING\ \ OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR\ \ USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability.\ \ IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY,\ \ WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING\ \ OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,\ \ INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE\ \ BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\n\ a. No trademark licenses are granted under this Agreement, and in connection with\ \ the Llama Materials, neither Meta nor Licensee may use any name or mark owned\ \ by or associated with the other or any of its affiliates, except as required\ \ for reasonable and customary use in describing and redistributing the Llama Materials\ \ or as set forth in this Section 5(a). Meta hereby grants you a license to use\ \ “Llama” (the “Mark”) solely as required to comply with the last sentence of Section\ \ 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at\ \ https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising\ \ out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to\ \ Meta’s ownership of Llama Materials and derivatives made by or for Meta, with\ \ respect to any derivative works and modifications of the Llama Materials that\ \ are made by you, as between you and Meta, you are and will be the owner of such\ \ derivative works and modifications.\nc. If you institute litigation or other proceedings\ \ against Meta or any entity (including a cross-claim or counterclaim in a lawsuit)\ \ alleging that the Llama Materials or Llama 3.2 outputs or results, or any portion\ \ of any of the foregoing, constitutes infringement of intellectual property or\ \ other rights owned or licensable by you, then any licenses granted to you under\ \ this Agreement shall terminate as of the date such litigation or claim is filed\ \ or instituted. 
You will indemnify and hold harmless Meta from and against any\ \ claim by any third party arising out of or related to your use or distribution\ \ of the Llama Materials.\n6. Term and Termination. The term of this Agreement will\ \ commence upon your acceptance of this Agreement or access to the Llama Materials\ \ and will continue in full force and effect until terminated in accordance with\ \ the terms and conditions herein. Meta may terminate this Agreement if you are\ \ in breach of any term or condition of this Agreement. Upon termination of this\ \ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,\ \ 4 and 7 shall survive the termination of this Agreement. \n7. Governing Law and\ \ Jurisdiction. This Agreement will be governed and construed under the laws of\ \ the State of California without regard to choice of law principles, and the UN\ \ Convention on Contracts for the International Sale of Goods does not apply to\ \ this Agreement. The courts of California shall have exclusive jurisdiction of\ \ any dispute arising out of this Agreement. \n### Llama 3.2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 3.2. If you access or use Llama 3.2, you agree to this Acceptable Use Policy\ \ (“**Policy**”). The most recent copy of this policy can be found at [https://www.llama.com/llama3_2/use-policy](https://www.llama.com/llama3_2/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 3.2 safely and responsibly.\ \ You agree you will not use, or allow others to use, Llama 3.2 to:\n1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 1. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 2. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 3.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 4. Collect, process, disclose, generate, or infer private or sensitive\ \ information about individuals, including information about individuals’ identity,\ \ health, or demographic information, unless you have obtained the right to do so\ \ in accordance with applicable law\n 5. Engage in or facilitate any action or\ \ generate any content that infringes, misappropriates, or otherwise violates any\ \ third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 6. 
Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n 7. Engage in any action, or\ \ facilitate any action, to intentionally circumvent or remove usage restrictions\ \ or other safety measures, or to enable functionality disabled by Meta \n2. Engage\ \ in, promote, incite, facilitate, or assist in the planning or development of activities\ \ that present a risk of death or bodily harm to individuals, including use of Llama\ \ 3.2 related to the following:\n 8. Military, warfare, nuclear industries or\ \ applications, espionage, use for materials or activities that are subject to the\ \ International Traffic Arms Regulations (ITAR) maintained by the United States\ \ Department of State or to the U.S. Biological Weapons Anti-Terrorism Act of 1989\ \ or the Chemical Weapons Convention Implementation Act of 1997\n 9. Guns and\ \ illegal weapons (including weapon development)\n 10. Illegal drugs and regulated/controlled\ \ substances\n 11. Operation of critical infrastructure, transportation technologies,\ \ or heavy machinery\n 12. Self-harm or harm to others, including suicide, cutting,\ \ and eating disorders\n 13. Any content intended to incite or promote violence,\ \ abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive\ \ or mislead others, including use of Llama 3.2 related to the following:\n 14.\ \ Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n\ \ 15. Generating, promoting, or furthering defamatory content, including the\ \ creation of defamatory statements, images, or other content\n 16. Generating,\ \ promoting, or further distributing spam\n 17. Impersonating another individual\ \ without consent, authorization, or legal right\n 18. Representing that the\ \ use of Llama 3.2 or outputs are human-generated\n 19. Generating or facilitating\ \ false online engagement, including fake reviews and other means of fake online\ \ engagement \n4. Fail to appropriately disclose to end users any known dangers\ \ of your AI system 5. Interact with third party tools, models, or software designed\ \ to generate unlawful content or engage in unlawful or harmful conduct and/or represent\ \ that the outputs of such tools, models, or software are associated with Meta or\ \ Llama 3.2\n\nWith respect to any multimodal models included in Llama 3.2, the\ \ rights granted under Section 1(a) of the Llama 3.2 Community License Agreement\ \ are not being granted to you if you are an individual domiciled in, or a company\ \ with a principal place of business in, the European Union. 
This restriction does\ \ not apply to end users of a product or service that incorporates any such multimodal\ \ models.\n\nPlease report any violation of this Policy, software “bug,” or other\ \ problems that could lead to a violation of this Policy through one of the following\ \ means:\n\n* Reporting issues with the model: [https://github.com/meta-llama/llama-models/issues](https://l.workplace.com/l.php?u=https%3A%2F%2Fgithub.com%2Fmeta-llama%2Fllama-models%2Fissues&h=AT0qV8W9BFT6NwihiOHRuKYQM_UnkzN_NmHMy91OT55gkLpgi4kQupHUl0ssR4dQsIQ8n3tfd0vtkobvsEvt1l4Ic6GXI2EeuHV8N08OG2WnbAmm0FL4ObkazC6G_256vN0lN9DsykCvCqGZ)\n\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\n\ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama\ \ 3.2: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text Job title: type: select options: - Student - Research Graduate - AI researcher - AI developer/engineer - Reporter - Other geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # 1a3orn/Llama-3.2-3B-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Llama-3.2-3B`](https://huggingface.co/meta-llama/Llama-3.2-3B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.2-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo 1a3orn/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo 1a3orn/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo 1a3orn/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo 1a3orn/Llama-3.2-3B-Q4_K_M-GGUF --hf-file llama-3.2-3b-q4_k_m.gguf -c 2048 ```
Satyake/tiny-chatbot-dpo
Satyake
"2024-05-26T14:55:13Z"
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
"2024-05-26T14:53:05Z"
--- license: apache-2.0 library_name: peft tags: - trl - dpo - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 model-index: - name: tiny-chatbot-dpo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-chatbot-dpo This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
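A minimal inference sketch for the DPO-tuned adapter, assuming `peft`'s `AutoPeftModelForCausalLM`; the prompt is illustrative, since the expected chat template is not documented in the card:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base TinyLlama model and applies the adapter weights in one call
model = AutoPeftModelForCausalLM.from_pretrained("Satyake/tiny-chatbot-dpo")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```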
zandfj/LLaMA2-7B-Chat-sft-042615-moren
zandfj
"2024-04-26T07:55:08Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-04-26T07:53:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DKYoon/mt5-base-lm-adapt
DKYoon
"2023-09-05T05:07:45Z"
114
0
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "arxiv:2205.12647", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-04-13T18:43:07Z"
--- license: apache-2.0 --- 🤗 Language model initialized from mT5 and trained for an additional 100K steps on the Prefix LM objective using mC4 data. Paper: [Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation](https://arxiv.org/abs/2205.12647) Authors: Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant PyTorch port of the original Flax checkpoint at [Google/T5X repository](https://github.com/google-research/t5x).
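A minimal loading sketch, assuming the standard `transformers` seq2seq API; note this is an LM-adapted checkpoint (not instruction-tuned), so it continues text rather than following instructions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DKYoon/mt5-base-lm-adapt")
model = AutoModelForSeq2SeqLM.from_pretrained("DKYoon/mt5-base-lm-adapt")

# Prefix LM objective: the encoder reads a prefix and the decoder continues it
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```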
GuiGel/beto-uncased-flert-context-we-lstm-crf-meddocan
GuiGel
"2022-11-08T07:19:25Z"
6
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "region:us" ]
token-classification
"2022-11-08T07:16:36Z"
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("GuiGel/beto-uncased-flert-context-we-lstm-crf-meddocan") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
esb/whisper-aed-voxpopuli
esb
"2022-10-24T14:48:27Z"
0
0
null
[ "esb", "en", "dataset:esb/datasets", "dataset:facebook/voxpopuli", "region:us" ]
null
"2022-10-24T14:48:10Z"
--- language: - en tags: - esb datasets: - esb/datasets - facebook/voxpopuli --- To reproduce this run, first install Whisper from the Transformers compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper): ``` pip install git+https://github.com/patrickvonplaten/whisper.git ``` Then execute the following script: ```bash #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esb/datasets" \ --dataset_config_name="voxpopuli" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-voxpopuli" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
huggingtweets/stockstotrade
huggingtweets
"2021-11-19T03:41:39Z"
10
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: en thumbnail: https://www.huggingtweets.com/stockstotrade/1637293295111/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/469936583416610816/EZt8Vl04_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">StocksToTrade</div> <div style="text-align: center; font-size: 14px;">@stockstotrade</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from StocksToTrade. | Data | StocksToTrade | | --- | --- | | Tweets downloaded | 3238 | | Retweets | 663 | | Short tweets | 360 | | Tweets kept | 2215 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c33zwruj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stockstotrade's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1upgfq9z/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/stockstotrade') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NasimB/gpt2-concat-mod-datasets-rarity1-rerun
NasimB
"2023-07-10T02:49:37Z"
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-10T00:33:42Z"
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-mod-datasets-rarity1-rerun results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-mod-datasets-rarity1-rerun This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.0263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7311 | 0.3 | 500 | 5.6497 | | 5.3805 | 0.59 | 1000 | 5.2065 | | 5.0306 | 0.89 | 1500 | 4.9574 | | 4.7526 | 1.18 | 2000 | 4.8142 | | 4.6058 | 1.48 | 2500 | 4.6885 | | 4.4982 | 1.78 | 3000 | 4.5904 | | 4.3593 | 2.07 | 3500 | 4.5261 | | 4.185 | 2.37 | 4000 | 4.4783 | | 4.154 | 2.66 | 4500 | 4.4233 | | 4.1262 | 2.96 | 5000 | 4.3708 | | 3.8986 | 3.26 | 5500 | 4.3804 | | 3.8767 | 3.55 | 6000 | 4.3494 | | 3.8605 | 3.85 | 6500 | 4.3124 | | 3.7194 | 4.14 | 7000 | 4.3395 | | 3.5981 | 4.44 | 7500 | 4.3194 | | 3.5952 | 4.74 | 8000 | 4.3059 | | 3.5511 | 5.03 | 8500 | 4.3089 | | 3.3393 | 5.33 | 9000 | 4.3236 | | 3.3388 | 5.62 | 9500 | 4.3220 | | 3.3443 | 5.92 | 10000 | 4.3139 | | 3.2213 | 6.22 | 10500 | 4.3304 | | 3.1851 | 6.51 | 11000 | 4.3313 | | 3.1911 | 6.81 | 11500 | 4.3317 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
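A minimal generation sketch, assuming the standard `transformers` causal-LM API; the prompt and sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NasimB/gpt2-concat-mod-datasets-rarity1-rerun")
model = AutoModelForCausalLM.from_pretrained("NasimB/gpt2-concat-mod-datasets-rarity1-rerun")

input_ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```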
MohamedAhmedAE/phi-2-finetuned-gsm8k
MohamedAhmedAE
"2023-12-14T10:51:40Z"
1
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:other", "region:us" ]
null
"2023-12-14T10:05:15Z"
--- license: other library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-2 model-index: - name: phi-2-finetuned-gsm8k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-finetuned-gsm8k This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8899 | 0.13 | 500 | 1.0927 | | 0.8948 | 0.27 | 1000 | 1.0892 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
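A minimal sketch of loading the LoRA adapter on top of the base model, assuming `peft.PeftModel`; `trust_remote_code=True` reflects how phi-2 originally shipped and may not be needed on recent `transformers`, and the GSM8K-style prompt is illustrative:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "MohamedAhmedAE/phi-2-finetuned-gsm8k")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

prompt = "If a train travels 60 miles per hour for 2 hours, how far does it go?"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```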
evkes/LLM-falc-deloitte
evkes
"2023-11-13T23:30:29Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded", "region:us" ]
null
"2023-11-13T22:52:22Z"
--- library_name: peft base_model: vilsonrodrigues/falcon-7b-instruct-sharded --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.1
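For reference, the `bitsandbytes` quantization config listed above corresponds to the following loading setup for the base model (a sketch; the parameter values are taken directly from the list, while `device_map="auto"` is an added assumption requiring `accelerate`):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "vilsonrodrigues/falcon-7b-instruct-sharded",
    quantization_config=bnb_config,
    device_map="auto",  # assumption: accelerate is installed
)
```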
facebook/m2m100-12B-last-ckpt
facebook
"2023-01-24T17:03:07Z"
407
25
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "m2m100-12B", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-03-12T00:28:28Z"
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit tags: - m2m100-12B --- # M2M100 12B M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository. The model can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece` run `pip install sentencepiece` ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-last-ckpt") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-last-ckpt") # translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "La vie est comme une boîte de chocolat." # translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Life is like a box of chocolate." ``` See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
kyoungmiin/style_lr_64
kyoungmiin
"2025-03-01T20:08:33Z"
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-03-01T19:58:59Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: sks widget: [] tags: - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - kyoungmiin/style_lr_64 <Gallery /> ## Model description These are kyoungmiin/style_lr_64 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use sks to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](kyoungmiin/style_lr_64/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
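Until the TODO above is filled in, here is a minimal sketch of running the pipeline with these LoRA weights; it assumes `diffusers`' standard SDXL loading path with `load_lora_weights` and a CUDA device:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kyoungmiin/style_lr_64")

# "sks" is the trigger word documented in this card
image = pipe("a landscape photo in sks style").images[0]
image.save("sks_sample.png")
```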
asdfre453/olio
asdfre453
"2025-03-10T23:13:24Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-03-10T22:34:31Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
AlignmentResearch/robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10
AlignmentResearch
"2024-04-27T02:51:21Z"
103
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "base_model:finetune:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-04-27T02:50:34Z"
--- license: apache-2.0 tags: - generated_from_trainer base_model: EleutherAI/pythia-410m model-index: - name: robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10 This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
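A minimal classification sketch, assuming the standard `transformers` pipeline; the label names (e.g. spam vs. ham) are not documented in this card:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-410m_mz-132_EnronSpam_n-its-10",
)
print(classifier("Congratulations! You have been selected for a free cruise."))
```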
LarryAIDraw/satsuki
LarryAIDraw
"2024-02-16T05:40:15Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2024-02-16T05:33:18Z"
--- license: creativeml-openrail-m --- https://civitai.com/models/55245/satsukiblue-archive-or-goofy-ai
Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF
Triangle104
"2025-03-25T10:42:01Z"
0
0
transformers
[ "transformers", "gguf", "abliterated", "uncensored", "llama-cpp", "gguf-my-repo", "image-text-to-text", "base_model:huihui-ai/gemma-3-4b-it-abliterated", "base_model:quantized:huihui-ai/gemma-3-4b-it-abliterated", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
"2025-03-25T10:41:46Z"
--- base_model: huihui-ai/gemma-3-4b-it-abliterated library_name: transformers license: gemma pipeline_tag: image-text-to-text tags: - abliterated - uncensored - llama-cpp - gguf-my-repo extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF This model was converted to GGUF format from [`huihui-ai/gemma-3-4b-it-abliterated`](https://huggingface.co/huihui-ai/gemma-3-4b-it-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/huihui-ai/gemma-3-4b-it-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF --hf-file gemma-3-4b-it-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF --hf-file gemma-3-4b-it-abliterated-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF --hf-file gemma-3-4b-it-abliterated-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Triangle104/gemma-3-4b-it-abliterated-Q5_K_S-GGUF --hf-file gemma-3-4b-it-abliterated-q5_k_s.gguf -c 2048 ```
augustogeog/q-Taxi-v3
augustogeog
"2023-02-08T19:06:32Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-08T19:06:28Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.32 +/- 2.89 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
import gymnasium as gym  # assumption: the Taxi-v3 environment comes from gymnasium

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-table and environment metadata.
model = load_from_hub(repo_id="augustogeog/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
phuongntc/rlhf_thamso_vietbase_4000
phuongntc
"2024-09-21T09:16:32Z"
91
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-09-21T09:15:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed
PrunaAI
"2024-08-02T16:04:09Z"
5
0
transformers
[ "transformers", "llama", "text-generation", "pruna-ai", "base_model:PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed", "base_model:finetune:PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-17T22:52:07Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with [HQQ](https://github.com/mobiusml/hqq) to 2 bits, as indicated by the model name. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash REQUIREMENTS_INSTRUCTIONS ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer IMPORTS MODEL_LOAD tokenizer = AutoTokenizer.from_pretrained("PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model PrunaAI/codellama-CodeLlama-13b-Python-hf-HQQ-2bit-smashed before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
borantorun/bert-base-uncased-finetuned-rte-run_18
borantorun
"2025-04-08T11:31:52Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-08T09:00:49Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-rte-run_18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-rte-run_18 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8850 - Accuracy: 0.6968 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.121682549710445e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 20 | 0.7101 | 0.5054 | | No log | 2.0 | 40 | 0.6264 | 0.6679 | | No log | 3.0 | 60 | 0.6796 | 0.6606 | | No log | 4.0 | 80 | 0.8850 | 0.6968 | | No log | 5.0 | 100 | 1.1607 | 0.6679 | | No log | 6.0 | 120 | 1.3237 | 0.6606 | | No log | 7.0 | 140 | 1.4720 | 0.6534 | | No log | 8.0 | 160 | 1.5483 | 0.6679 | | No log | 9.0 | 180 | 1.7616 | 0.6570 | | No log | 10.0 | 200 | 1.7248 | 0.6534 | | No log | 11.0 | 220 | 1.8424 | 0.6715 | | No log | 12.0 | 240 | 1.8870 | 0.6823 | | No log | 13.0 | 260 | 1.9615 | 0.6715 | | No log | 14.0 | 280 | 1.9907 | 0.6823 | | No log | 15.0 | 300 | 1.9896 | 0.6895 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
llmixer/BigWeave-v16-103b-6.0bpw-h6-exl2
llmixer
"2024-02-10T16:36:03Z"
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "6.0bpw", "h6", "exl2", "conversational", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-06T22:30:24Z"
--- license: llama2 language: - en pipeline_tag: conversational tags: - 6.0bpw - h6 - exl2 --- Exllamav2 6.0bpw h6 quant for [BigWeave-v16-103b](https://huggingface.co/llmixer/BigWeave-v16-103b). Default calibration dataset.
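A minimal loading sketch using the `exllamav2` library, following the class names in exllamav2's example scripts; the local model path, the lazy cache/autosplit choice (a 103B 6.0bpw quant spans multiple GPUs), and the sampling values are assumptions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at a local download of this repository.
config = ExLlamaV2Config()
config.model_dir = "BigWeave-v16-103b-6.0bpw-h6-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache while loading
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 200))
```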
josephloh/donut-receipts75
josephloh
"2024-03-06T02:19:22Z"
8
0
transformers
[ "transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-text-to-text
"2024-03-06T01:57:22Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
parsa96/distilbert-base-uncased-finetuned-emotion
parsa96
"2023-03-06T04:42:19Z"
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-03-05T06:03:17Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.928 - name: F1 type: f1 value: 0.9281573845269205 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2144 - Accuracy: 0.928 - F1: 0.9282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8343 | 1.0 | 250 | 0.3130 | 0.911 | 0.9087 | | 0.2517 | 2.0 | 500 | 0.2144 | 0.928 | 0.9282 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.12.0 - Datasets 2.9.0 - Tokenizers 0.13.2
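A short usage sketch with the standard `transformers` API; whether the saved config maps class ids to emotion names (e.g. "joy") or to generic LABEL_i depends on how the trainer was configured:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint; the emotion classes come from its config.
ckpt = "parsa96/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Map the highest-probability class id back to its label name.
print(model.config.id2label[probs.argmax(dim=-1).item()])
```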
tuanna08go/3140d717-9f8f-d728-b7bf-738fd45ce5bb
tuanna08go
"2025-01-10T09:55:22Z"
17
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B-Instruct", "base_model:adapter:unsloth/Llama-3.2-3B-Instruct", "license:llama3.2", "region:us" ]
null
"2025-01-10T09:40:42Z"
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 3140d717-9f8f-d728-b7bf-738fd45ce5bb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 933d19b25ccef737_train_data.json ds_type: json format: custom path: /workspace/input_data/933d19b25ccef737_train_data.json type: field_input: source field_instruction: comment field_output: title format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: tuanna08go/3140d717-9f8f-d728-b7bf-738fd45ce5bb hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/933d19b25ccef737_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: fc17788d-9ed3-4f16-97be-958596881cce wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: fc17788d-9ed3-4f16-97be-958596881cce warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3140d717-9f8f-d728-b7bf-738fd45ce5bb This model is a fine-tuned version of [unsloth/Llama-3.2-3B-Instruct](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.2883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0002 | 1 | 5.4042 | | 4.9817 | 0.0021 | 10 | 5.1537 | | 4.8897 | 0.0042 | 20 | 4.5369 | | 4.4357 | 0.0063 | 30 | 4.3646 | | 4.4064 | 0.0084 | 40 | 4.2992 | | 3.9764 | 0.0105 | 50 | 4.2883 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
bowilleatyou/5e65c3d1-e4e5-4934-9f90-bad537b53f70
bowilleatyou
"2025-03-30T19:33:14Z"
0
0
null
[ "region:us" ]
null
"2025-03-30T19:33:14Z"
mradermacher/Acolyte-22B-GGUF
mradermacher
"2024-09-23T18:54:05Z"
32
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:rAIfle/Acolyte-22B", "base_model:quantized:rAIfle/Acolyte-22B", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-09-22T16:35:10Z"
--- base_model: rAIfle/Acolyte-22B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/rAIfle/Acolyte-22B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Acolyte-22B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q2_K.gguf) | Q2_K | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.IQ3_XS.gguf) | IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q3_K_S.gguf) | Q3_K_S | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.IQ3_M.gguf) | IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q3_K_L.gguf) | Q3_K_L | 11.8 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.IQ4_XS.gguf) | IQ4_XS | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q5_K_S.gguf) | Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q5_K_M.gguf) | Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q6_K.gguf) | Q6_K | 18.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Acolyte-22B-GGUF/resolve/main/Acolyte-22B.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Jebadiah/Tess-gradient-ruby-p1
Jebadiah
"2024-05-19T15:17:50Z"
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:Jebadiah/Tess-gradient-ruby", "base_model:merge:Jebadiah/Tess-gradient-ruby", "base_model:defog/llama-3-sqlcoder-8b", "base_model:merge:defog/llama-3-sqlcoder-8b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-19T15:15:52Z"
--- base_model: - defog/llama-3-sqlcoder-8b - Jebadiah/Tess-gradient-ruby library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [Jebadiah/Tess-gradient-ruby](https://huggingface.co/Jebadiah/Tess-gradient-ruby) as a base. ### Models Merged The following models were included in the merge: * [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Jebadiah/Tess-gradient-ruby # No parameters necessary for base model - model: defog/llama-3-sqlcoder-8b parameters: density: 0.5 weight: 0.5 merge_method: dare_linear base_model: Jebadiah/Tess-gradient-ruby parameters: int8_mask: true dtype: bfloat16 ```
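To reproduce a merge from a configuration like the one above, mergekit is typically driven through its YAML entry point, as in the sketch below; the config file name and output directory are assumptions:

```bash
pip install mergekit

# Run the DARE linear merge described by the YAML config above.
mergekit-yaml config.yaml ./Tess-gradient-ruby-p1 --cuda
```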
mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF
mradermacher
"2024-12-29T12:14:50Z"
14
0
transformers
[ "transformers", "gguf", "Safetensors", "text-generation-inference", "merge", "en", "base_model:MaziyarPanahi/Experiment28M7_Strangemerges_32Experiment28", "base_model:quantized:MaziyarPanahi/Experiment28M7_Strangemerges_32Experiment28", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-12-29T12:07:43Z"
--- base_model: MaziyarPanahi/Experiment28M7_Strangemerges_32Experiment28 language: - en library_name: transformers license: apache-2.0 model_creator: MaziyarPanahi model_name: Experiment28M7_Strangemerges_32Experiment28 quantized_by: mradermacher tags: - Safetensors - text-generation-inference - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/MaziyarPanahi/Experiment28M7_Strangemerges_32Experiment28 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Experiment28M7_Strangemerges_32Experiment28-GGUF/resolve/main/Experiment28M7_Strangemerges_32Experiment28.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing 
some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Apel-sin/phi-4-abliterated-exl2
Apel-sin
"2025-02-03T17:11:48Z"
8
0
transformers
[ "transformers", "phi", "nlp", "math", "code", "chat", "conversational", "abliterated", "uncensored", "text-generation", "en", "base_model:huihui-ai/phi-4-abliterated", "base_model:finetune:huihui-ai/phi-4-abliterated", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
"2025-02-03T17:09:11Z"
--- license: mit license_link: https://huggingface.co/huihui-ai/phi-4-abliterated/resolve/main/LICENSE language: - en base_model: - huihui-ai/phi-4-abliterated pipeline_tag: text-generation tags: - phi - nlp - math - code - chat - conversational - abliterated - uncensored inference: parameters: temperature: 0 widget: - messages: - role: user content: How should I explain the Internet? library_name: transformers --- # huihui-ai/phi-4-abliterated This is an uncensored version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it). This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens. ## Use with ollama **Note:** this model requires [Ollama 0.5.5](https://github.com/ollama/ollama/releases/tag/v0.5.5). You can use [huihui_ai/phi4-abliterated](https://ollama.com/huihui_ai/phi4-abliterated) directly: ``` ollama run huihui_ai/phi4-abliterated ```
zapparias/pixiv-vit-mae-base
zapparias
"2024-11-29T04:30:20Z"
154
2
transformers
[ "transformers", "safetensors", "vit_mae", "pretraining", "vision", "anime", "image-feature-extraction", "endpoints_compatible", "region:us" ]
image-feature-extraction
"2024-11-29T03:18:23Z"
--- library_name: transformers tags: - vision - anime - image-feature-extraction --- # ViTMAE (base-sized model) pre-trained on Pixiv ViTMAE model pre-trained on Pixiv artworks from id 20 to 100649536. The architecture is the same as [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base), but with a smaller patch size (14) and a larger image size (266). All training was done on TPUs sponsored by [TPU Research Cloud](https://sites.research.google/trc/about/). ## Usage ```python
from transformers import AutoImageProcessor, ViTMAEForPreTraining, ViTModel

# for resizing images to 266 pixels and normalizing to [-1, 1]
processor = AutoImageProcessor.from_pretrained("zapparias/pixiv-vit-mae-base")

# load encoder + decoder
model = ViTMAEForPreTraining.from_pretrained("zapparias/pixiv-vit-mae-base")

# you can also load the encoder into a standard ViT model for feature extraction
model = ViTModel.from_pretrained("zapparias/pixiv-vit-mae-base", add_pooling_layer=False)
```
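A short follow-on sketch for pulling features out of the encoder, continuing from the `ViTModel` variant loaded above; the image path is an assumption:

```python
import torch
from PIL import Image

# Preprocess one artwork and run it through the encoder.
image = Image.open("artwork.jpg").convert("RGB")  # hypothetical local file
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Shape (1, num_patches + 1, hidden_size); index 0 along dim 1 is the [CLS] token.
features = outputs.last_hidden_state
print(features.shape)
```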
adammandic87/491009d8-1596-49c7-b9f6-26bf8ab5a711
adammandic87
"2025-02-02T22:59:01Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored", "base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored", "license:llama3", "region:us" ]
null
"2025-02-02T22:55:36Z"
--- library_name: peft license: llama3 base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored tags: - axolotl - generated_from_trainer model-index: - name: 491009d8-1596-49c7-b9f6-26bf8ab5a711 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - be5ab324e25875e7_train_data.json ds_type: json format: custom path: /workspace/input_data/be5ab324e25875e7_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: adammandic87/491009d8-1596-49c7-b9f6-26bf8ab5a711 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/be5ab324e25875e7_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 21c17688-3386-4af0-a372-07bbb0501a28 wandb_project: Birthday-SN56-13-Gradients-On-Demand wandb_run: your_name wandb_runid: 21c17688-3386-4af0-a372-07bbb0501a28 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 491009d8-1596-49c7-b9f6-26bf8ab5a711 This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.8741 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.7 | 0.0015 | 1 | 5.9562 | | 4.9071 | 0.0726 | 50 | 4.8841 | | 1.4953 | 0.1452 | 100 | 3.2982 | | 3.3976 | 0.2178 | 150 | 2.1256 | | 2.1854 | 0.2904 | 200 | 1.8741 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
viko0123/Flux_Lora
viko0123
"2025-01-27T10:24:05Z"
9
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "license:other", "region:us" ]
text-to-image
"2025-01-27T10:23:54Z"
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: undefined instance_prompt: license: other --- # Flux_Lora <Gallery /> ## Model description ## Trigger words You should use `` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/viko0123/Flux_Lora/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-portrait-trainer](https://fal.ai/models/fal-ai/flux-lora-portrait-trainer).
mradermacher/NeuralRaphael7B-i1-GGUF
mradermacher
"2025-02-03T23:31:34Z"
379
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:itchindigo/NeuralRaphael7B", "base_model:quantized:itchindigo/NeuralRaphael7B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-02-03T19:45:56Z"
--- base_model: itchindigo/NeuralRaphael7B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/itchindigo/NeuralRaphael7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/NeuralRaphael7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS | | 
[GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralRaphael7B-i1-GGUF/resolve/main/NeuralRaphael7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
kostiantynk-out/097c11c9-1c71-4e77-bddc-629fe0ff9605
kostiantynk-out
"2025-01-24T13:21:08Z"
6
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Maykeye/TinyLLama-v0", "base_model:adapter:Maykeye/TinyLLama-v0", "license:apache-2.0", "region:us" ]
null
"2025-01-24T13:19:14Z"
---
library_name: peft
license: apache-2.0
base_model: Maykeye/TinyLLama-v0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 097c11c9-1c71-4e77-bddc-629fe0ff9605
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Maykeye/TinyLLama-v0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 193547730a2e5d0c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/193547730a2e5d0c_train_data.json
  type:
    field_input: tools
    field_instruction: query
    field_output: answers
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/097c11c9-1c71-4e77-bddc-629fe0ff9605
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/193547730a2e5d0c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e953d7b0-9ce7-43c2-a622-78e06b2bf500
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: e953d7b0-9ce7-43c2-a622-78e06b2bf500
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# 097c11c9-1c71-4e77-bddc-629fe0ff9605

This model is a fine-tuned version of [Maykeye/TinyLLama-v0](https://huggingface.co/Maykeye/TinyLLama-v0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.8957

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3046       | 0.0001 | 1    | 10.3022         |
| 10.1589       | 0.0004 | 3    | 10.2948         |
| 9.3958        | 0.0008 | 6    | 10.1709         |
| 9.5752        | 0.0013 | 9    | 9.8957          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
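Since the card omits a usage example, here is a minimal sketch of loading this LoRA adapter on top of its base model with `peft`. The repo and base-model names come from the card above; the prompt and generation settings are illustrative.

```python
# Minimal sketch: attach the LoRA adapter from this repo to its base model.
# Names are taken from the card; prompt and generation settings are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Maykeye/TinyLLama-v0")
model = PeftModel.from_pretrained(base, "kostiantynk-out/097c11c9-1c71-4e77-bddc-629fe0ff9605")
tokenizer = AutoTokenizer.from_pretrained("Maykeye/TinyLLama-v0")

inputs = tokenizer("Hello, world!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```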
ZeroWw/gemma-2-2b-it-GGUF
ZeroWw
"2024-08-01T11:02:31Z"
8
0
null
[ "gguf", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-08-01T10:58:31Z"
---
license: mit
language:
- en
pipeline_tag: text-generation
---

My own (ZeroWw) quantizations.

Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as pure f16.

Updated on: Thu Aug 01, 10:58:32
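For readers who want to reproduce this mixed-precision recipe, a minimal sketch follows. It assumes a recent llama.cpp build whose `llama-quantize` tool supports the per-tensor `--output-tensor-type` and `--token-embedding-type` overrides (an assumption about your build, not something this card documents); the file paths are placeholders.

```python
# Sketch: keep output and embedding tensors at f16 while quantizing the rest to q6_k.
# Assumes llama-quantize supports per-tensor type overrides; paths are placeholders.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--output-tensor-type", "f16",    # keep the output tensor at f16
        "--token-embedding-type", "f16",  # keep the token embeddings at f16
        "gemma-2-2b-it.f16.gguf",         # placeholder: full-precision input GGUF
        "gemma-2-2b-it.f16.q6_k.gguf",    # placeholder: mixed-precision output GGUF
        "q6_k",
    ],
    check=True,
)
```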
Kquant03/CognitiveFusion-4x7B-GGUF
Kquant03
"2024-01-13T03:00:21Z"
62
11
null
[ "gguf", "merge", "en", "dataset:Open-Orca/OpenOrca", "dataset:Intel/orca_dpo_pairs", "dataset:cognitivecomputations/dolphin", "arxiv:2101.03961", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-01-03T04:26:10Z"
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
- cognitivecomputations/dolphin
language:
- en
tags:
- merge
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/QkbFYjmpqCKfCyWnF-rwf.png)

(Image credit goes to [NeuralNovel](https://huggingface.co/NeuralNovel))

# Making frankenMoEs more than just a meme... (These are the GGUF files; I cannot quantize my other models properly until llama.cpp is fixed, sorry!)

I was approached with the idea to make a merge based on storytelling, and considering frankenMoEs' tendency to be hallucinatory, I thought that was a wonderful idea. However, I wanted it to be more than just a "meme model": I wanted to make something that would actually work. So we decided to use [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) as the base, [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) as two of the four experts in order to stabilize it, [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) in order to improve its logical reasoning, and [NeuralNovel/Panda-7B-v0.1](https://huggingface.co/NeuralNovel/Panda-7B-v0.1) to improve its creativity and nuanced storytelling mechanics. We believe that, while it might not beat base Mixtral Instruct on logic, it is definitely more creative.

Special thanks to [NeuralNovel](https://huggingface.co/NeuralNovel) for collaborating with me on this project.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/1A1oNsGLUco1Rsv9SYQtX.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/ZpX1KMNYj11k4pF0NX3Q9.png)

It performs better than base Mixtral 8x7B across many evaluations, at half the size, and is comparable to most MoEs. Thanks so much to HuggingFace for evaluating it!
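The card describes the expert layout but not the merge recipe itself. For illustration only, a hypothetical mergekit-moe configuration matching that layout might look like the sketch below; `gate_mode`, `dtype`, and the `positive_prompts` values are invented placeholders, not the settings actually used for CognitiveFusion.

```yaml
# Hypothetical mergekit-moe recipe matching the expert layout described above.
# gate_mode, dtype, and positive_prompts are placeholders, not the real settings.
base_model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b
    positive_prompts: ["assist", "explain"]
  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b
    positive_prompts: ["chat", "answer"]
  - source_model: SanjiWatsuki/Silicon-Maid-7B
    positive_prompts: ["reason", "logic"]
  - source_model: NeuralNovel/Panda-7B-v0.1
    positive_prompts: ["story", "creative writing"]
```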
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 8.06 GB | 10.04 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 10.50 GB | 12.48 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 13.6 GB | 15.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.6 GB | ~15.57 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.6 GB | 18.58 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 16.6 GB | ~18.58 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.8 GB | 21.78 GB | very large, extremely low quality loss |
| [Q8 XXL](https://huggingface.co/Kquant03/CognitiveFusion-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.7 GB | 27.68 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"

### (from the MistralAI papers... click the quoted question above to navigate to it directly.)

The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.

Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.

So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:

- **Sparse MoE layers** are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of "experts" (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
- **A gate network or router** that determines which tokens are sent to which expert. For example, in the image below, the token "More" is sent to the second expert, and the token "Parameters" is sent to the first network. As we'll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.

At every layer, for every token, a router network chooses two of these groups (the "experts") to process the token and combines their output additively.
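To make the routing description concrete, here is a minimal PyTorch sketch of a top-2 MoE layer of the kind described above. The dimensions and expert count are illustrative, not this model's actual configuration; the sketch also computes the load-balancing auxiliary loss discussed further below.

```python
# Minimal top-2 MoE layer sketch (illustrative sizes, not this model's config).
# Also computes a Switch-Transformer-style load-balancing auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # the router: one learned score per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (n_tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)    # (n_tokens, n_experts)
        weights, idx = probs.topk(2, dim=-1)       # pick two experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(2):                         # combine the chosen experts' outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        # Load-balancing auxiliary loss: pushes the router toward giving every
        # expert a roughly equal share of tokens and of probability mass.
        frac_tokens = F.one_hot(idx[:, 0], probs.shape[-1]).float().mean(dim=0)
        frac_probs = probs.mean(dim=0)
        aux_loss = probs.shape[-1] * (frac_tokens * frac_probs).sum()
        return out, aux_loss

moe = Top2MoE()
y, aux = moe(torch.randn(10, 512))
print(y.shape, aux.item())  # torch.Size([10, 512]) and a scalar near 1.0 when balanced
```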
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png)

Switch Layer: the MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)

So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.

Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:

- **Training:** MoEs enable significantly more compute-efficient pretraining, but they've historically struggled to generalize during fine-tuning, leading to overfitting.
- **Inference:** Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we'll need enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? Because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).

If all our tokens are sent to just a few popular experts, training becomes inefficient. In normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces, as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples (the sketch above computes a Switch-style version of it). The following sections will also explore the concept of expert capacity, which introduces a threshold on how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.

## "Wait... but you called this a frankenMoE?"

The difference between a MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained simultaneously. There are rumors about someone developing a way for us to unscuff these frankenMoE models by training the router layer simultaneously. For now, frankenMoE remains psychotic. This model does exceedingly well, however, especially in terms of storywriting compared to Mixtral.

## "Are there at least any datasets or plans for this model, in any way?"

There are many datasets included as a result of merging four models. For one, Silicon-Maid is a merge of xDan, which is trained on the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) and the [OpenOrca DPO pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). Loyal-Macaroni-Maid uses OpenChat-3.5, Starling, and NeuralChat, which have so many datasets I'm not going to list them all here. Dolphin 2.6 Mistral also has a large variety of datasets. Panda-7B-v0.1 was fine-tuned by the person collaborating on this project with me, using a base Mistral and a private dataset. Panda gives the model its creativity while the rest act as support.

# Results

## Some results from the model's performance.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/gPOIVGSeqsTFiT_0QWGlr.png)

Most models answer "eternal life"... this was a compelling argument given by this model. At lower quants, this model will lean towards eternal life.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/Zj45g_V_e5VH95SlPUVZC.png)

Considerably better than MythoMax, in my opinion...

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/dzfP1qZrOtCCpLmH7U1JP.png)

It actually wrote a perfect haiku. This model is so much better than my other frankenMoEs...

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/FosMQSQIieUv0fzS8XP0x.png)

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/KNiQIxuGnBzKWU7xrJWqi.gif)

There's a reason I pushed this straight to GGUF right away: I lack the compute to make EXL2 quants, but perhaps someone else would be interested in doing that.
nikoryagin/sae_Qwen_Qwen2.5-7B_resid_post_layer_25_size_16384_batchtopk_qqjiu1ue_lora_fpbnrhea
nikoryagin
"2025-04-06T20:37:10Z"
0
0
transformers
[ "transformers", "safetensors", "sae", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
"2025-04-06T20:36:46Z"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
cycloneboy/chinese_mobilebert_base_f4
cycloneboy
"2023-04-02T14:12:32Z"
49
0
transformers
[ "transformers", "pytorch", "pretraining", "zh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2023-04-02T14:03:34Z"
---
language:
- zh
license: "apache-2.0"
---

## Chinese-MobileBERT

> The original [Chinese-MobileBERT](https://github.com/ymcui/Chinese-MobileBERT) repository does not provide PyTorch weights; here, the weights were converted via the [model_convert](https://github.com/CycloneBoy/model_convert) repository.

This repository is developed based on: https://github.com/ymcui/Chinese-MobileBERT

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find the technical report or resource useful, please cite the following technical report in your paper.

```
@misc{cui-2022-chinese-mobilebert,
  title={Chinese MobileBERT},
  author={Cui, Yiming},
  howpublished={\url{https://github.com/ymcui/Chinese-MobileBERT}},
  year={2022}
}
```
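The card does not include a loading example. A minimal sketch follows, assuming the converted checkpoint is compatible with the standard `transformers` MobileBERT classes and ships a BERT-style vocabulary; this compatibility is an assumption about the conversion, not something the card documents.

```python
# Sketch: load the converted Chinese MobileBERT weights with transformers.
# Assumes compatibility with the standard MobileBERT classes (untested assumption).
import torch
from transformers import BertTokenizer, MobileBertModel

tokenizer = BertTokenizer.from_pretrained("cycloneboy/chinese_mobilebert_base_f4")
model = MobileBertModel.from_pretrained("cycloneboy/chinese_mobilebert_base_f4")

inputs = tokenizer("你好,世界!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```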
outlookAi/o6IQK480kl
outlookAi
"2025-01-27T04:19:13Z"
16
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-01-27T03:58:22Z"
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: Eve
---

# O6Iqk480Kl

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `Eve` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/o6IQK480kl', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)